Jan 23 18:06:44 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 18:06:44 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 18:06:44 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 18:06:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 18:06:44 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
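The flag warnings above, and the two that follow, all point to the same migration: these kubelet flags now belong in the file passed via --config. A minimal sketch of the KubeletConfiguration equivalents, with illustrative values only (the socket path, plugin directory, taint, and reservations below are assumptions, not values read from this node):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (assumed CRI-O socket path)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir (assumed path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (assumed taint)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved (assumed reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi
# --minimum-container-ttl-duration has no config-file field; its warning
# points at the eviction thresholds instead (assumed example value):
evictionHard:
  memory.available: 100Mi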
Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 18:06:45 crc kubenswrapper[4688]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.119467 4688 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122458 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122478 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122482 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122487 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122490 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122495 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122500 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122505 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122511 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122515 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122520 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122524 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122528 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122532 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122535 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122539 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122543 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122546 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122550 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122554 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:06:45 crc 
kubenswrapper[4688]: W0123 18:06:45.122558 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122562 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122565 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122569 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122572 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122576 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122579 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122583 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122586 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122590 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122593 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122597 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122600 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122603 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122607 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122611 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122616 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122621 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122625 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122628 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122633 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
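Each W-line at feature_gate.go:330 above is the upstream feature-gate parser meeting an OpenShift-specific gate name it does not recognize; the gate is skipped with a warning rather than failing startup. A hedged stdlib sketch of that parse-and-warn behavior, with an illustrative known-gate set (not the component's real table):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// known marks the gates this (hypothetical) component recognizes; every
// other name produces an "unrecognized feature gate" warning, as in the log.
var known = map[string]bool{
	"KMSv1":                     true,
	"CloudDualStackNodeIPs":     true,
	"ValidatingAdmissionPolicy": true,
}

// setFromSpec parses a "Gate1=true,Gate2=false" spec the way a
// --feature-gates style option is parsed, warning instead of erroring
// on unknown names so startup can continue.
func setFromSpec(spec string) (map[string]bool, error) {
	gates := map[string]bool{}
	for _, pair := range strings.Split(spec, ",") {
		name, val, ok := strings.Cut(pair, "=")
		if !ok {
			return nil, fmt.Errorf("missing value in %q", pair)
		}
		if !known[name] {
			fmt.Printf("W: unrecognized feature gate: %s\n", name)
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			return nil, err
		}
		gates[name] = b
	}
	return gates, nil
}

func main() {
	gates, err := setFromSpec("KMSv1=true,GatewayAPI=true,CloudDualStackNodeIPs=true")
	if err != nil {
		panic(err)
	}
	fmt.Println("feature gates:", gates)
}
```

Warning instead of failing is consistent with the volume of entries above: one shared gate list can be handed to components that each recognize only a subset, and each simply logs the rest.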
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122637 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122642 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122647 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122651 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122655 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122661 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122665 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122669 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122672 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122676 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122679 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122683 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122687 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122691 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122694 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122698 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122701 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122704 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122708 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122712 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122716 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122720 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122724 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122730 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122734 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122739 4688 feature_gate.go:330] unrecognized feature 
gate: ClusterMonitoringConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122744 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122753 4688 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122759 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.122763 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122861 4688 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122874 4688 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122886 4688 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122893 4688 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122904 4688 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122914 4688 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122922 4688 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122928 4688 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122934 4688 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122939 4688 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122945 4688 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122950 4688 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122955 4688 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122960 4688 flags.go:64] FLAG: --cgroup-root="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122965 4688 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122970 4688 flags.go:64] FLAG: --client-ca-file="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122976 4688 flags.go:64] FLAG: --cloud-config="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122981 4688 flags.go:64] FLAG: --cloud-provider="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122987 4688 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122993 4688 flags.go:64] FLAG: --cluster-domain="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.122999 4688 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123004 4688 flags.go:64] FLAG: --config-dir="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123009 4688 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123019 4688 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123031 4688 flags.go:64] FLAG: --container-log-max-size="10Mi" 
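The long I-line run at flags.go:64 dumps every registered flag with its effective value, whether defaulted or explicitly set. The stdlib flag package can produce the same inventory via VisitAll; a sketch, with a few flag names abbreviated from the dump above:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	flag.String("address", "0.0.0.0", "bind address")
	flag.Bool("anonymous-auth", true, "allow anonymous requests")
	flag.Int("max-pods", 110, "maximum pods per node")
	flag.Parse()

	// VisitAll walks every registered flag, defaulted or explicitly set,
	// which is what makes the startup dump a complete inventory.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}
```

VisitAll (as opposed to Visit) includes defaulted flags, which is why the dump shows every option even though only a handful were passed on the command line.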
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123036 4688 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123042 4688 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123047 4688 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123052 4688 flags.go:64] FLAG: --contention-profiling="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123057 4688 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123062 4688 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123068 4688 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123076 4688 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123087 4688 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123093 4688 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123098 4688 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123104 4688 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123109 4688 flags.go:64] FLAG: --enable-server="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123114 4688 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123123 4688 flags.go:64] FLAG: --event-burst="100" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123129 4688 flags.go:64] FLAG: --event-qps="50" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123134 4688 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123139 4688 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123144 4688 flags.go:64] FLAG: --eviction-hard="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123151 4688 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123157 4688 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123162 4688 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123167 4688 flags.go:64] FLAG: --eviction-soft="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123172 4688 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123176 4688 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123185 4688 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123209 4688 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123215 4688 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123220 4688 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123224 4688 flags.go:64] FLAG: 
--feature-gates="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123231 4688 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123236 4688 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123241 4688 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123246 4688 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123252 4688 flags.go:64] FLAG: --healthz-port="10248" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123257 4688 flags.go:64] FLAG: --help="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123262 4688 flags.go:64] FLAG: --hostname-override="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123267 4688 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123272 4688 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123278 4688 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123286 4688 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123291 4688 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123296 4688 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123302 4688 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123308 4688 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123313 4688 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123318 4688 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123323 4688 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123328 4688 flags.go:64] FLAG: --kube-reserved="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123333 4688 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123338 4688 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123344 4688 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123349 4688 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123354 4688 flags.go:64] FLAG: --lock-file="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123359 4688 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123363 4688 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123368 4688 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123376 4688 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123381 4688 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123386 4688 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 
18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123391 4688 flags.go:64] FLAG: --logging-format="text" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123396 4688 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123402 4688 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123407 4688 flags.go:64] FLAG: --manifest-url="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123412 4688 flags.go:64] FLAG: --manifest-url-header="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123419 4688 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123424 4688 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123430 4688 flags.go:64] FLAG: --max-pods="110" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123436 4688 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123440 4688 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123446 4688 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123451 4688 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123458 4688 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123463 4688 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123469 4688 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123482 4688 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123487 4688 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123493 4688 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123498 4688 flags.go:64] FLAG: --pod-cidr="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123504 4688 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123513 4688 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123518 4688 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123524 4688 flags.go:64] FLAG: --pods-per-core="0" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123536 4688 flags.go:64] FLAG: --port="10250" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123542 4688 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123547 4688 flags.go:64] FLAG: --provider-id="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123552 4688 flags.go:64] FLAG: --qos-reserved="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123558 4688 flags.go:64] FLAG: --read-only-port="10255" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123563 4688 flags.go:64] FLAG: --register-node="true" Jan 23 18:06:45 crc 
kubenswrapper[4688]: I0123 18:06:45.123569 4688 flags.go:64] FLAG: --register-schedulable="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123574 4688 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123584 4688 flags.go:64] FLAG: --registry-burst="10" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123589 4688 flags.go:64] FLAG: --registry-qps="5" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123595 4688 flags.go:64] FLAG: --reserved-cpus="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123600 4688 flags.go:64] FLAG: --reserved-memory="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123607 4688 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123613 4688 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123619 4688 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123624 4688 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123634 4688 flags.go:64] FLAG: --runonce="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123643 4688 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123648 4688 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123655 4688 flags.go:64] FLAG: --seccomp-default="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123661 4688 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123668 4688 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123675 4688 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123681 4688 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123687 4688 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123692 4688 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123697 4688 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123702 4688 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123709 4688 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123714 4688 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123719 4688 flags.go:64] FLAG: --system-cgroups="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123724 4688 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123735 4688 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123740 4688 flags.go:64] FLAG: --tls-cert-file="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123745 4688 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123751 4688 flags.go:64] FLAG: --tls-min-version="" Jan 23 18:06:45 
crc kubenswrapper[4688]: I0123 18:06:45.123757 4688 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123761 4688 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123766 4688 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123771 4688 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123776 4688 flags.go:64] FLAG: --v="2" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123783 4688 flags.go:64] FLAG: --version="false" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123795 4688 flags.go:64] FLAG: --vmodule="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123801 4688 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.123806 4688 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123957 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123964 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123970 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123974 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123978 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123983 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123990 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.123996 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124001 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124007 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124012 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124017 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124022 4688 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124027 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124031 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124035 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124040 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124046 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124051 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124056 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124060 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124065 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124069 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124075 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124079 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124083 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124088 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124092 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124096 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124102 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124108 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124114 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124119 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124124 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124129 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124133 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124137 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124143 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124148 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124152 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124157 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124165 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124170 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124174 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124179 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124184 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124208 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124213 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124217 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124223 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124229 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124233 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124236 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124240 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124243 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124247 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124250 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124254 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124257 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124262 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124265 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124269 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124273 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124276 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124280 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124284 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124287 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124291 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124294 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124297 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.124301 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.124314 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.134329 4688 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 
18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.134366 4688 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134516 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134529 4688 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134538 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134546 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134555 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134563 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134571 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134579 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134587 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134594 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134602 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134610 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134618 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134626 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134634 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134642 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134650 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134657 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134665 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134675 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
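The empty GOGC/GOMAXPROCS/GOTRACEBACK values in the server.go:493 "Golang settings" entry above mean those environment variables are unset, so the Go runtime defaults apply (for instance, GOGC defaults to 100). A small sketch that reproduces the same report and also shows the effective parallelism:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// An empty value here means the variable is unset and the Go runtime
	// default applies, matching the log line above.
	fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))

	// runtime.GOMAXPROCS(0) queries the current setting without changing it.
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```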
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134688 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134698 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134706 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134716 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134727 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134737 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134747 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134756 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134764 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134772 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134780 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134790 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134798 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134806 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134815 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134824 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134832 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134839 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134847 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134854 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134862 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134870 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134878 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134886 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134893 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 
23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134901 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134909 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134916 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134925 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134932 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134940 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134948 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134956 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134963 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134971 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134979 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134987 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.134997 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135006 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135015 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135023 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135031 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135041 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
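Each run of warnings terminates in an I-line at feature_gate.go:386 (one appears just below) printing the effective map: recognized compiled-in defaults merged with the explicit overrides. A minimal sketch of that resolution step; the default values here are illustrative placeholders, not the component's real defaults:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Recognized gates with their compiled-in defaults (illustrative).
	defaults := map[string]bool{
		"CloudDualStackNodeIPs": false,
		"KMSv1":                 false,
		"NodeSwap":              false,
	}
	// Explicit overrides, e.g. from the kubelet config file.
	overrides := map[string]bool{"CloudDualStackNodeIPs": true, "KMSv1": true}

	effective := map[string]bool{}
	for k, v := range defaults {
		effective[k] = v
	}
	for k, v := range overrides {
		effective[k] = v // overrides win over defaults
	}

	// Print deterministically, in the spirit of the feature_gate.go:386 dump.
	keys := make([]string, 0, len(effective))
	for k := range effective {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	fmt.Print("feature gates: {map[")
	for i, k := range keys {
		if i > 0 {
			fmt.Print(" ")
		}
		fmt.Printf("%s:%t", k, effective[k])
	}
	fmt.Println("]}")
}
```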
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135051 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135060 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135069 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135078 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135087 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135095 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135103 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135112 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.135125 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135373 4688 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135388 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135396 4688 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135406 4688 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135415 4688 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135423 4688 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135430 4688 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135438 4688 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135446 4688 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135455 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135463 4688 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135470 4688 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135478 4688 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135486 4688 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 18:06:45 crc 
kubenswrapper[4688]: W0123 18:06:45.135494 4688 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135502 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135510 4688 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135520 4688 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135532 4688 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135540 4688 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135550 4688 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135558 4688 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135566 4688 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135575 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135585 4688 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135593 4688 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135603 4688 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135612 4688 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135620 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135628 4688 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135637 4688 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135645 4688 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135654 4688 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135662 4688 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135670 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135678 4688 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135687 4688 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135695 4688 feature_gate.go:330] unrecognized feature gate: Example Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135702 4688 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 
18:06:45.135710 4688 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135718 4688 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135726 4688 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135736 4688 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135747 4688 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135757 4688 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135768 4688 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135777 4688 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135785 4688 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135794 4688 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135802 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135810 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135819 4688 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135827 4688 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135835 4688 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135843 4688 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135851 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135859 4688 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135867 4688 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135874 4688 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135882 4688 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135889 4688 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135897 4688 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135905 4688 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135932 4688 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 
23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135940 4688 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135947 4688 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135955 4688 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135963 4688 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135971 4688 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135979 4688 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.135987 4688 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.136000 4688 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.136567 4688 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.140593 4688 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.140708 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
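The cert/key pair loaded above drives the rotation schedule reported in the next entries, where the rotation deadline falls well before the certificate's expiry. That pattern reflects client-go's certificate manager scheduling renewal at a randomized fraction of the certificate's lifetime rather than at expiry; a hedged sketch of the idea, where the 70-90% jitter window is an assumption about that behavior and the PEM handling is simplified:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// rotationDeadline picks a point at a randomized fraction of the
// certificate's validity window so a fleet of kubelets does not renew in
// lockstep. The 70-90% window is an assumption mirroring the jitter used
// by client-go's certificate manager, not a verbatim copy of it.
func rotationDeadline(cert *x509.Certificate) time.Time {
	total := cert.NotAfter.Sub(cert.NotBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return cert.NotBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The first PEM block is assumed to be the leaf client certificate.
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := rotationDeadline(cert)
	fmt.Printf("Certificate expiration is %s, rotation deadline is %s\n",
		cert.NotAfter, deadline)
	if time.Now().After(deadline) {
		fmt.Println("Rotating certificates") // as in the entries below
	}
}
```

Randomizing the deadline spreads CSR load across nodes, which matters here: the entries that follow show the deadline already past, an immediate rotation attempt, and the CSR POST failing with connection refused while the API server is still coming up.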
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.141472 4688 server.go:997] "Starting client certificate rotation"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.141508 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.141822 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-07 20:07:21.058513788 +0000 UTC
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.142893 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.152870 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.154394 4688 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.156520 4688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.174395 4688 log.go:25] "Validated CRI v1 runtime API"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.201837 4688 log.go:25] "Validated CRI v1 image API"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.203575 4688 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.205872 4688 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-18-00-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.205905 4688 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.224768 4688 manager.go:217] Machine: {Timestamp:2026-01-23 18:06:45.223501234 +0000 UTC m=+0.219325715 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4ae7c631-0e1b-4025-81f0-d80cccca604c BootID:8158c768-9e42-4de1-98a8-b8ec3e55c3b3 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e0:de:a3 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e0:de:a3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:80:55 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:89:21:0f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:17:06:e9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:72:d2:9d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:86:e5:51:3a:d6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:c0:cc:91:71:1a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.225236 4688 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.225400 4688 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.226542 4688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.226940 4688 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.227014 4688 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.227692 4688 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.227721 4688 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.228323 4688 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.228383 4688 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.228858 4688 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.229050 4688 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.229941 4688 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.229980 4688 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.230031 4688 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.230063 4688 kubelet.go:324] "Adding apiserver pod source"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.230088 4688 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.233623 4688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.234071 4688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.250139 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.250296 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.268522 4688 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.268947 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.269033 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269307 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269340 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269350 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269360 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269376 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269385 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269394 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269410 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269421 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269431 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269475 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269488 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.269516 4688 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.270011 4688 server.go:1280] "Started kubelet"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.270329 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.270586 4688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.270766 4688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.271073 4688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:06:45 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273135 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273302 4688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273334 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:28:42.860317791 +0000 UTC
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273415 4688 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273425 4688 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.273761 4688 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.273530 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.213:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d6e6433b86cf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:06:45.269982457 +0000 UTC m=+0.265806908,LastTimestamp:2026-01-23 18:06:45.269982457 +0000 UTC m=+0.265806908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.274033 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.274113 4688 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.274582 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.274639 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError"
Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.274726 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="200ms"
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.278896 4688 factory.go:55] Registering systemd factory
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.278926 4688 factory.go:221] Registration of the systemd container factory successfully
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.279248 4688 factory.go:153] Registering CRI-O factory
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.279262 4688 factory.go:221] Registration of the crio container factory successfully
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.279325 4688 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.279348 4688 factory.go:103] Registering Raw factory
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.279364 4688 manager.go:1196] Started watching for new ooms in manager
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.280433 4688 manager.go:319] Starting recovery of all containers
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.302692 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.302901 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.302933 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.302960 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.302985 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303013 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303029 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303047 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303116 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303155 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303180 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303226 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303249 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303296 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303324 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303343 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303365 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303428 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303447 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303469 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303488 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303549 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303596 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303615 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303639 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303707 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303753 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.303778 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304376 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304440 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304459 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304473 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304486 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304498 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304510 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304524 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304536 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304556 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304569 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304582 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304594 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304607 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304620 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304631 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304644 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304655 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304669 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304684 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304699 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304711 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304724 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304737 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304758 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304777 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304793 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304807 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304820 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304832 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304845 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304857 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304870 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304883 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304895 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304907 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304918 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304932 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304944 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304957 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304969 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304980 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.304992 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305006 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305063 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305077 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305090 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305103 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305116 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305129 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305143 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305155 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305169 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305183 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305213 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305227 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305241 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305255 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305267 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305280 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305295 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305315 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305328 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305341 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305354 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305368 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305382 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305394 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305408 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305421 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305435 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305448 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305461 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305475 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305487 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305500 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305525 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305538 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305552 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305569 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305583 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305598 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305613 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305633 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305650 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305669 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305688 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305701 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305720 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305736 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305748 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305761 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305774 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305786 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305806 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305819 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305831 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305843 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305857 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305870 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305884 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305904 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305917 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305929 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305941 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305954 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305965 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305978 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.305990 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306001 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306016 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306029 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306042 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb"
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306054 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306067 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306084 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306097 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306110 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306122 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306134 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306148 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306161 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306173 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306221 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306236 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306250 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306263 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306276 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306295 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306308 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306319 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306331 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306345 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306361 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306373 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306386 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306398 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306411 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306423 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306462 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306476 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306488 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306507 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306519 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306531 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306547 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306559 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.306572 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307269 4688 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307295 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307308 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307322 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307335 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307349 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307368 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307382 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307403 4688 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307415 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307438 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307450 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307461 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307474 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307485 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307497 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307508 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307522 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307534 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307545 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307573 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307586 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307598 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307609 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307621 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307633 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307651 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307664 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307675 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307686 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307700 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307719 4688 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307730 4688 reconstruct.go:97] "Volume reconstruction finished" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.307739 4688 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.318545 4688 manager.go:324] Recovery completed Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.329684 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.352428 4688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.354935 4688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.354982 4688 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.355023 4688 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.355088 4688 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:06:45 crc kubenswrapper[4688]: W0123 18:06:45.356711 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.356853 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.374155 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.428443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.428486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.428495 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.430454 4688 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.430490 4688 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.430526 4688 state_mem.go:36] "Initialized new in-memory state store" Jan 23 
18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.446836 4688 policy_none.go:49] "None policy: Start" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.448157 4688 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.448262 4688 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.455155 4688 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.474588 4688 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.475504 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="400ms" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.503133 4688 manager.go:334] "Starting Device Plugin manager" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.503691 4688 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.503716 4688 server.go:79] "Starting device plugin registration server" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.504289 4688 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.504309 4688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.504615 4688 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.504786 4688 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.504810 4688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.510948 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.605015 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.606279 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.606310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.606319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.606390 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.606921 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.213:6443: connect: connection refused" node="crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 
18:06:45.656227 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.656322 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.657943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.657970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.657978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.658216 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.658483 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.658705 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.659331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.659400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.659412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.659857 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.660062 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.660147 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.660501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.660534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.660551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661373 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661461 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661731 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.661824 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662155 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662318 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662715 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.662746 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663069 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663091 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663234 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663260 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663562 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.663572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.666932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.667019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.667034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.667804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.667827 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.667837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712227 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712325 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712436 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712557 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712707 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712753 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712778 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712828 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712924 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.712985 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.713018 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.713037 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.713072 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.807690 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.809991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.810039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.810053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.810082 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.810649 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.213:6443: connect: connection refused" node="crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813921 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813951 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813941 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.813987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814003 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814048 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814083 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814071 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814103 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814141 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814160 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814185 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814275 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814297 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814371 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814413 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814444 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814444 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814542 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814561 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.814611 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:45 crc kubenswrapper[4688]: E0123 18:06:45.876504 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="800ms" Jan 23 18:06:45 crc kubenswrapper[4688]: I0123 18:06:45.989432 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.013838 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.019762 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.028533 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a241575e054b662e3ea22b1a59dc84e08b9e7231aba5989548949628cebb61fb WatchSource:0}: Error finding container a241575e054b662e3ea22b1a59dc84e08b9e7231aba5989548949628cebb61fb: Status 404 returned error can't find the container with id a241575e054b662e3ea22b1a59dc84e08b9e7231aba5989548949628cebb61fb Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.034035 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.037800 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2c569444f1e0be26dab27ca8730653cf32aea0f527363ebc6bad1aaaf7098326 WatchSource:0}: Error finding container 2c569444f1e0be26dab27ca8730653cf32aea0f527363ebc6bad1aaaf7098326: Status 404 returned error can't find the container with id 2c569444f1e0be26dab27ca8730653cf32aea0f527363ebc6bad1aaaf7098326 Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.039256 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.041505 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0647773086b5392a7a7cd3a5932d6aba8bc22d568d74cfcbea8f7d17232af268 WatchSource:0}: Error finding container 0647773086b5392a7a7cd3a5932d6aba8bc22d568d74cfcbea8f7d17232af268: Status 404 returned error can't find the container with id 0647773086b5392a7a7cd3a5932d6aba8bc22d568d74cfcbea8f7d17232af268 Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.056528 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0aef723a616cf6951d40c6a45714bf3b801536ee2e31183d78d392f5f27b1e44 WatchSource:0}: Error finding container 0aef723a616cf6951d40c6a45714bf3b801536ee2e31183d78d392f5f27b1e44: Status 404 returned error can't find the container with id 0aef723a616cf6951d40c6a45714bf3b801536ee2e31183d78d392f5f27b1e44 Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.057897 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7960d217009161508602a2a6b98886ec23bf36393c4226f2afd9a9b7e989cf09 WatchSource:0}: Error finding container 7960d217009161508602a2a6b98886ec23bf36393c4226f2afd9a9b7e989cf09: Status 404 returned error can't find the container with id 7960d217009161508602a2a6b98886ec23bf36393c4226f2afd9a9b7e989cf09 Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.058950 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.059052 4688 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.211047 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.212634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.212679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.212691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.212729 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.213549 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.213:6443: connect: connection refused" node="crc" Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.271866 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.273773 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:31:52.863013716 +0000 UTC Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.430004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0647773086b5392a7a7cd3a5932d6aba8bc22d568d74cfcbea8f7d17232af268"} Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.432001 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c569444f1e0be26dab27ca8730653cf32aea0f527363ebc6bad1aaaf7098326"} Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.434027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a241575e054b662e3ea22b1a59dc84e08b9e7231aba5989548949628cebb61fb"} Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.435985 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7960d217009161508602a2a6b98886ec23bf36393c4226f2afd9a9b7e989cf09"} Jan 23 18:06:46 crc kubenswrapper[4688]: I0123 18:06:46.438509 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0aef723a616cf6951d40c6a45714bf3b801536ee2e31183d78d392f5f27b1e44"} Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.676451 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.676998 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.677418 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="1.6s" Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.756781 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.756876 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:46 crc kubenswrapper[4688]: W0123 18:06:46.892476 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:46 crc kubenswrapper[4688]: E0123 18:06:46.892583 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.013836 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.015145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.015167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.015175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.015212 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:47 crc kubenswrapper[4688]: E0123 
18:06:47.015636 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.213:6443: connect: connection refused" node="crc" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.268934 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 18:06:47 crc kubenswrapper[4688]: E0123 18:06:47.269874 4688 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.271134 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.274255 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:08:14.108499215 +0000 UTC Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.444349 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.444398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.444415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.444427 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.444441 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.445855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.445927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.445940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.446075 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4ee3aca724412e0d4d6af85fcea443fad7f7b922931f1dd9e40d4cf0db602c5" exitCode=0 Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.446166 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4ee3aca724412e0d4d6af85fcea443fad7f7b922931f1dd9e40d4cf0db602c5"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.446173 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.447030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.447085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.447102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.448327 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5" exitCode=0 Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.448388 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.448406 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.449871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.449914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.449929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.450141 4688 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787" exitCode=0 Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.450215 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.450243 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.451153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.451216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.451232 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.452113 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772" exitCode=0 Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.452141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772"} Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.452262 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.452962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.452993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.453008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.454748 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.455812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.455832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:47 crc kubenswrapper[4688]: I0123 18:06:47.455844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.274049 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.274385 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:55:37.922480897 +0000 UTC Jan 23 18:06:48 crc kubenswrapper[4688]: E0123 18:06:48.278412 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="3.2s" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.457121 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="80d49ec387b398f97f8780a86c960996c4e9858f5fbc123a7851ab9befa49735" exitCode=0 Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.457175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"80d49ec387b398f97f8780a86c960996c4e9858f5fbc123a7851ab9befa49735"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.457310 4688 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.458986 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"959d36fc44bef9ec6f26f5c4838620200e14b4bfcdcb049544374118a5ec07f3"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.460976 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.461048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.462312 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.462454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.462517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.462530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477"} Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463389 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463406 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.463419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.464311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.464335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.464344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:48 crc 
kubenswrapper[4688]: W0123 18:06:48.523007 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:48 crc kubenswrapper[4688]: E0123 18:06:48.523215 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.616873 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.618004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.618061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.618076 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:48 crc kubenswrapper[4688]: I0123 18:06:48.618102 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:48 crc kubenswrapper[4688]: E0123 18:06:48.618747 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.213:6443: connect: connection refused" node="crc" Jan 23 18:06:48 crc kubenswrapper[4688]: W0123 18:06:48.797804 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:48 crc kubenswrapper[4688]: E0123 18:06:48.797893 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:49 crc kubenswrapper[4688]: W0123 18:06:49.102538 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:49 crc kubenswrapper[4688]: E0123 18:06:49.102630 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.271579 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.213:6443: 
connect: connection refused Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.274819 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:23:46.629935844 +0000 UTC Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.467599 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a"} Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.467729 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.468477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.468512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.468523 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.471716 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942"} Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.471745 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6"} Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.471758 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1"} Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.472208 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.472862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.472889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.472900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.473457 4688 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="823bf6b109f2f8e4a0c783d10700dd1f2d0d31eef9d52a940425902d15aa4f0b" exitCode=0 Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.473482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"823bf6b109f2f8e4a0c783d10700dd1f2d0d31eef9d52a940425902d15aa4f0b"} Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.473526 4688 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.473532 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474835 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:49 crc kubenswrapper[4688]: I0123 18:06:49.474848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:49 crc kubenswrapper[4688]: W0123 18:06:49.512494 4688 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.213:6443: connect: connection refused Jan 23 18:06:49 crc kubenswrapper[4688]: E0123 18:06:49.512591 4688 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.213:6443: connect: connection refused" logger="UnhandledError" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.275748 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:24:28.5369229 +0000 UTC Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479407 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"10bdc7b64199b3912157c85a3a3aa571908a5173ce7862968157d619e85d6502"} Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e273bd11d82d71d1822833e7353c8e5814701f06d88c889dd879db7bbb78c8fb"} Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479468 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4c165abde4ef74c04856d3ff76968e668cdc7325586393c496541fa240735b9"} Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479479 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e06b0c8550c864489e4ec6084321621323ac746ab01a98d9cf45ad6229080d61"} Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479493 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479500 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479575 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.479663 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.480303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.480333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.480344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.481113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.481127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.481135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.535495 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:50 crc kubenswrapper[4688]: I0123 18:06:50.969444 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.276290 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:33:39.166320288 +0000 UTC Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487060 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9dd02091d8fd112c08e2f5556b04f7c62789b6df15a891d893de8a419b690aea"} Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487146 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487300 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487437 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.487987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:51 crc 
kubenswrapper[4688]: I0123 18:06:51.488331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.488350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.488358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.488556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.488678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.488765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.522461 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.522683 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.524240 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.524276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.524289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.656625 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.818890 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.821147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.821226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.821241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:51 crc kubenswrapper[4688]: I0123 18:06:51.821270 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.277366 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:42:36.35803226 +0000 UTC Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.490240 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.490372 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.491495 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.491532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.491543 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.492254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.492297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.492306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:52 crc kubenswrapper[4688]: I0123 18:06:52.991779 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.277912 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:32:49.478091799 +0000 UTC Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.493091 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.494461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.494499 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.494511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:53 crc kubenswrapper[4688]: I0123 18:06:53.595477 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.279004 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:07:40.156051688 +0000 UTC Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.495214 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.496126 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.496197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.496209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.523138 4688 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 
18:06:54.523313 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.797862 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.798151 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.799913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.799980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.799992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:54 crc kubenswrapper[4688]: I0123 18:06:54.961931 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.279837 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:10:49.520447468 +0000 UTC Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.497453 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.498519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.498564 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.498572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:55 crc kubenswrapper[4688]: E0123 18:06:55.511115 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.581454 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:55 crc kubenswrapper[4688]: I0123 18:06:55.587675 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.280753 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:20:23.989291885 +0000 UTC Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.499951 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.501229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.501287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.501303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:56 crc kubenswrapper[4688]: I0123 18:06:56.504304 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:06:57 crc kubenswrapper[4688]: I0123 18:06:57.280887 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:34:18.743994272 +0000 UTC Jan 23 18:06:57 crc kubenswrapper[4688]: I0123 18:06:57.501749 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:57 crc kubenswrapper[4688]: I0123 18:06:57.502723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:57 crc kubenswrapper[4688]: I0123 18:06:57.502757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:57 crc kubenswrapper[4688]: I0123 18:06:57.502766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:58 crc kubenswrapper[4688]: I0123 18:06:58.281980 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:17:47.950866328 +0000 UTC Jan 23 18:06:58 crc kubenswrapper[4688]: I0123 18:06:58.504095 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:06:58 crc kubenswrapper[4688]: I0123 18:06:58.505545 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:06:58 crc kubenswrapper[4688]: I0123 18:06:58.505582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:06:58 crc kubenswrapper[4688]: I0123 18:06:58.505598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:06:59 crc kubenswrapper[4688]: I0123 18:06:59.282836 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:40:10.259818863 +0000 UTC Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.281915 4688 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.283900 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:54:15.490062155 +0000 UTC Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.364539 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54658->192.168.126.11:17697: read: connection 
reset by peer" start-of-body= Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.364609 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54658->192.168.126.11:17697: read: connection reset by peer" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.511238 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.513419 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942" exitCode=255 Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.513499 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942"} Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.513690 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.514577 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.514609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.514618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.515076 4688 scope.go:117] "RemoveContainer" containerID="e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942" Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.536489 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:07:00 crc kubenswrapper[4688]: I0123 18:07:00.536654 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.285044 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:11:49.29583124 +0000 UTC Jan 23 18:07:01 crc kubenswrapper[4688]: E0123 18:07:01.479336 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 23 
18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.525247 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.527816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe"} Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.527964 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.528736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.528777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.528791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.604113 4688 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 18:07:01 crc kubenswrapper[4688]: I0123 18:07:01.604229 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 18:07:02 crc kubenswrapper[4688]: I0123 18:07:02.285683 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:59:50.322200548 +0000 UTC Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.286845 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:36:58.467366354 +0000 UTC Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.622163 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.622384 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.623638 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.623668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.623679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:03 crc kubenswrapper[4688]: I0123 18:07:03.642895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 18:07:04 crc kubenswrapper[4688]: 
I0123 18:07:04.287254 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:44:08.899315743 +0000 UTC Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.524012 4688 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.524072 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.534561 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.535350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.535398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:04 crc kubenswrapper[4688]: I0123 18:07:04.535409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.288047 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:05:04.706924421 +0000 UTC Jan 23 18:07:05 crc kubenswrapper[4688]: E0123 18:07:05.511378 4688 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.544392 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.544596 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.544749 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.546092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.546142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.546157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:05 crc kubenswrapper[4688]: I0123 18:07:05.550544 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.288938 4688 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:37:24.560678789 +0000 UTC Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.489471 4688 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.534226 4688 trace.go:236] Trace[1259676139]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:06:52.956) (total time: 13577ms): Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[1259676139]: ---"Objects listed" error: 13577ms (18:07:06.534) Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[1259676139]: [13.577390391s] [13.577390391s] END Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.534268 4688 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 18:07:06 crc kubenswrapper[4688]: E0123 18:07:06.538963 4688 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.539942 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.541453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.541530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.541555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.589617 4688 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.589988 4688 trace.go:236] Trace[2095113747]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:06:53.304) (total time: 13285ms): Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[2095113747]: ---"Objects listed" error: 13285ms (18:07:06.589) Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[2095113747]: [13.285723147s] [13.285723147s] END Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.590083 4688 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.602764 4688 trace.go:236] Trace[307928796]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:06:53.842) (total time: 12759ms): Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[307928796]: ---"Objects listed" error: 12759ms (18:07:06.602) Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[307928796]: [12.759770991s] [12.759770991s] END Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.602800 4688 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.604172 4688 trace.go:236] Trace[1589079909]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 18:06:53.911) (total time: 12692ms): Jan 23 18:07:06 crc kubenswrapper[4688]: Trace[1589079909]: ---"Objects listed" error: 12692ms (18:07:06.604) Jan 23 18:07:06 crc 
kubenswrapper[4688]: Trace[1589079909]: [12.692605752s] [12.692605752s] END Jan 23 18:07:06 crc kubenswrapper[4688]: I0123 18:07:06.604236 4688 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.285339 4688 apiserver.go:52] "Watching apiserver" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.289060 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:51:23.672493868 +0000 UTC Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.292054 4688 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.292396 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.292792 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.292845 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.292871 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.293000 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.293115 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.293359 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.293488 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.293550 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.293604 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.295058 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.295219 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.295060 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.295866 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.295888 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.296101 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.296107 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.296893 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.298159 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.374749 4688 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.381575 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.396796 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400076 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400107 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400133 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400168 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400197 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400282 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 
18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400300 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400315 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400335 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400352 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400366 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400394 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400462 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400477 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400506 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400520 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400536 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400550 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400599 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400631 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400664 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400679 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400719 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400735 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400798 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400816 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400832 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400848 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400845 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400866 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400943 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400965 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.400987 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401021 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401037 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401054 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401072 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401165 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401206 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401228 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401246 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401278 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401295 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401309 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401332 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401349 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401368 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401388 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401406 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401578 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401588 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401653 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401677 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401735 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401762 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.401935 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402024 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402051 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402088 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402108 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402137 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402229 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402292 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402366 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402489 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402524 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402554 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402606 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402632 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402682 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402817 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402870 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402895 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402941 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403034 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403103 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403127 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403149 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403204 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403227 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403314 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403361 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403384 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403473 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403519 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403542 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403588 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403613 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403685 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403778 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403916 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403942 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404015 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404037 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404081 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404108 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402605 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407659 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402627 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402667 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402797 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402949 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.402998 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403077 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403111 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403313 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403586 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403591 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403614 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403746 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403754 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403906 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.403934 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404070 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.404153 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:07.904136949 +0000 UTC m=+22.899961390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404857 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.404885 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405081 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405413 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405476 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
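The E0123 record above is the only failure in this stretch of the log: UnmountVolume.TearDown for the hostpath-provisioner PVC cannot proceed because, at m=+22.9 (roughly 23 seconds after the kubelet started), the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the kubelet, so no CSI client is available. Rather than failing permanently, the kubelet parks the operation and schedules a retry ("No retries permitted until ... (durationBeforeRetry 500ms)"). Below is a minimal Go sketch of that per-operation retry gate; only the 500ms initial delay is taken from the log, and the doubling factor, the cap, and all names are illustrative assumptions, not the kubelet's actual implementation.

package main

import (
	"fmt"
	"time"
)

// retryGate mimics the backoff visible in the nestedpendingoperations
// record above: after a failure, no retry is permitted until
// now + durationBeforeRetry, and the delay grows for the next failure.
type retryGate struct {
	delay     time.Duration // current durationBeforeRetry
	notBefore time.Time     // "No retries permitted until ..."
	maxDelay  time.Duration // assumed cap; not taken from the log
}

func newRetryGate() *retryGate {
	return &retryGate{delay: 500 * time.Millisecond, maxDelay: 2 * time.Minute}
}

// recordFailure pushes the next permitted attempt into the future and
// doubles the delay (up to the cap) for the failure after that.
func (g *retryGate) recordFailure(now time.Time) {
	g.notBefore = now.Add(g.delay)
	if next := g.delay * 2; next <= g.maxDelay {
		g.delay = next
	}
}

// mayRetry reports whether a new attempt is permitted yet.
func (g *retryGate) mayRetry(now time.Time) bool {
	return !now.Before(g.notBefore)
}

func main() {
	g := newRetryGate()
	now := time.Now()
	g.recordFailure(now)
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %v)\n",
		g.notBefore.Format(time.RFC3339Nano), 500*time.Millisecond)
	fmt.Println("retry allowed immediately?", g.mayRetry(now)) // false
}

If the driver keeps failing to register, a gate like this makes repeated failures of the same volume operation appear progressively farther apart; once the driver pod comes back and registers, the next permitted attempt succeeds and the volume is torn down.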
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405799 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.405984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.406143 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.406776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.406804 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.406823 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.406948 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407037 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407135 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407119 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407292 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407603 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407622 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407782 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.407812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408025 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408101 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408364 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408590 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408801 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.408894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409094 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409306 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409339 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409491 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409578 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409648 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.409830 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410008 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410288 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410664 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410681 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410820 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.410928 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411032 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411083 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411367 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411505 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411525 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411580 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411600 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411769 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411924 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.411972 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412126 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412524 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412476 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.412698 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.435658 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.435745 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.435891 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436336 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436817 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437023 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437350 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437734 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437962 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.438748 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.435644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.435839 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436519 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436638 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436731 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437030 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.436984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437127 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437225 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437832 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.443151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.443306 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.444048 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.444085 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.444718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.445032 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.445416 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.445811 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.445887 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.437293 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.444441 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.445679 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.438931 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.446899 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.446990 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447104 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447305 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447296 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447314 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447418 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.448121 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.450229 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.450340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.450334 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.454396 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.447989 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.454755 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455273 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455539 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455567 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455649 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455693 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455717 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455739 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455760 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455782 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455804 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455827 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455849 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455873 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455896 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455921 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455946 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455990 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.455894 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.456085 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.456455 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.456549 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.456893 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.456967 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457023 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457221 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457232 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457508 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457554 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457600 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457644 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457670 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457698 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457731 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457790 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457816 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457837 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457859 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457865 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457904 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457925 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.457945 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458080 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458103 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458236 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458303 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458327 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458358 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458382 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458407 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458430 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458469 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458516 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458540 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458588 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458610 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458632 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458657 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458679 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458700 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458722 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458744 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458765 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458788 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458812 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458833 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.458854 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459027 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459048 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459068 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459090 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459112 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459133 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459159 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459235 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459257 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459294 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459320 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459396 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459423 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459450 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459473 4688 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459544 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459585 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459685 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459712 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.461982 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462009 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462023 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462040 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462054 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462069 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462082 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462095 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462108 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462121 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462134 4688 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462146 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462159 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462173 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462202 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462217 4688 reconciler_common.go:293] "Volume detached 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462229 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462241 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462253 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462265 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462277 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462289 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462300 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462314 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462326 4688 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462339 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462351 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462363 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462375 4688 reconciler_common.go:293] "Volume detached 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462387 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462398 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462410 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462422 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462432 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462442 4688 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462453 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462495 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462507 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462518 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462531 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462543 4688 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462554 4688 reconciler_common.go:293] "Volume detached for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462565 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462577 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462589 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462600 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462612 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462623 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462633 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462643 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462655 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462666 4688 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462676 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462687 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462699 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462711 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462722 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462733 4688 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462744 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462755 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462767 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462778 4688 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462788 4688 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462799 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462811 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462821 4688 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462832 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462842 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462854 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462864 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462876 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462887 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462899 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462910 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462921 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462931 4688 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462942 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462954 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462964 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462974 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462986 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462996 4688 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463007 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463017 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463027 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463037 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463047 4688 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463058 4688 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463069 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463082 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463093 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463103 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463113 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463123 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc 
kubenswrapper[4688]: I0123 18:07:07.463133 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463143 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463154 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463164 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463175 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463205 4688 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463215 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463225 4688 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463237 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463247 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463258 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463268 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463278 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc 
kubenswrapper[4688]: I0123 18:07:07.463288 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463299 4688 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463310 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463320 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463332 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463348 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463360 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463372 4688 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463382 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463392 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463403 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463413 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463423 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") 
on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463434 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463444 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463455 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463466 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463478 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463490 4688 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463500 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463511 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463521 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463531 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463541 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463553 4688 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463563 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463574 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463585 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463597 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463608 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463620 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459735 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459775 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.459863 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.460237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.460766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.460973 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.461548 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.461655 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.461771 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.461858 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462067 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462142 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462282 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462524 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462562 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462697 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.462888 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463288 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.463801 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.464287 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.464544 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.464802 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.465026 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.465237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.465428 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.465632 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.465935 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.466516 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.468240 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.468478 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.468701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.469015 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.469476 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.470224 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.470599 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.470951 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.471245 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.471416 4688 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.471949 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.471476 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.471569 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.473326 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 18:07:07.973303172 +0000 UTC m=+22.969127703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.471718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.472179 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.472262 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.472563 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.472592 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.472647 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.474090 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:07.974079853 +0000 UTC m=+22.969904294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.474400 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.472771 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.473203 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.474634 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.476007 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.476590 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.476843 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.479559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.482664 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.482750 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.483242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.488532 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.488642 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.488721 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.488861 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:07.988836447 +0000 UTC m=+22.984660878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.493179 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.493431 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.493446 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.493457 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.493496 4688 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:07.993481674 +0000 UTC m=+22.989306115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.496848 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.496890 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.497177 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.497417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.504471 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.506224 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.510744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.512135 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.515259 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.517531 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.524870 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.527123 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564677 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564768 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564778 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564787 4688 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564796 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564804 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564813 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564821 4688 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564829 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564827 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564838 4688 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564891 4688 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564905 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564917 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564929 4688 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564940 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564951 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564963 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564975 4688 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564986 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564997 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565008 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565019 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565030 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565040 4688 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565050 4688 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565060 4688 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565068 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565078 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565104 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565112 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565121 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565129 4688 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565138 4688 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565146 4688 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565154 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565162 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565170 4688 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565177 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565201 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565210 4688 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565218 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565226 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565235 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565243 4688 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565252 4688 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565260 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565268 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565276 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565284 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565292 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565302 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565310 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565318 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565325 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565334 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565342 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.565350 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.564873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.571620 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.571953 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.573000 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" exitCode=255 Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.573030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe"} Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.573062 4688 scope.go:117] "RemoveContainer" containerID="e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.596344 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.596957 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.597280 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.605316 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.612578 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.636885 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.651166 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.660411 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.668953 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.680812 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.756308 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 18:07:07 crc kubenswrapper[4688]: W0123 18:07:07.774460 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-da36a6d8a70815cd498c5a9462a9d2f9c664d4f0fe5f90b020ef6e4ca09d8590 WatchSource:0}: Error finding container da36a6d8a70815cd498c5a9462a9d2f9c664d4f0fe5f90b020ef6e4ca09d8590: Status 404 returned error can't find the container with id da36a6d8a70815cd498c5a9462a9d2f9c664d4f0fe5f90b020ef6e4ca09d8590 Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.790803 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 18:07:07 crc kubenswrapper[4688]: W0123 18:07:07.901661 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e800aefaa2dd5bc8b2de30649f9e22a6e956450beb4e4967d2740c61e5605486 WatchSource:0}: Error finding container e800aefaa2dd5bc8b2de30649f9e22a6e956450beb4e4967d2740c61e5605486: Status 404 returned error can't find the container with id e800aefaa2dd5bc8b2de30649f9e22a6e956450beb4e4967d2740c61e5605486 Jan 23 18:07:07 crc kubenswrapper[4688]: I0123 18:07:07.967847 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:07 crc kubenswrapper[4688]: E0123 18:07:07.968105 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:08.968070795 +0000 UTC m=+23.963895236 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.068736 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.068853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.068923 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.068954 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.068970 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.068972 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069043 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:09.069020688 +0000 UTC m=+24.064845139 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069071 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-23 18:07:09.069061149 +0000 UTC m=+24.064885610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.069169 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.069272 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069356 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069355 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069391 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069399 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:09.069384698 +0000 UTC m=+24.065209149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069402 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.069460 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:09.06944407 +0000 UTC m=+24.065268511 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.289635 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:11:24.773025645 +0000 UTC Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.538489 4688 csr.go:261] certificate signing request csr-dp5zg is approved, waiting to be issued Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.575892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134"} Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.575928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"968fc42be7c9833d8685b6be27bbf8d48823decdec457856dd70d7161eb4a603"} Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.577686 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.580283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e800aefaa2dd5bc8b2de30649f9e22a6e956450beb4e4967d2740c61e5605486"} Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.582539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"da36a6d8a70815cd498c5a9462a9d2f9c664d4f0fe5f90b020ef6e4ca09d8590"} Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.590848 4688 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.591128 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.591344 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.594318 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.596780 4688 csr.go:257] certificate signing request csr-dp5zg is issued Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.611605 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8a066b37f365447fb11871320a275381569c0b3ca5e80e70a50d3e8fd5a4942\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:00Z\\\",\\\"message\\\":\\\"W0123 18:06:49.485451 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0123 
18:06:49.485984 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769191609 cert, and key in /tmp/serving-cert-2043826348/serving-signer.crt, /tmp/serving-cert-2043826348/serving-signer.key\\\\nI0123 18:06:49.762719 1 observer_polling.go:159] Starting file observer\\\\nW0123 18:06:49.767595 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 18:06:49.771208 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:06:49.772645 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2043826348/tls.crt::/tmp/serving-cert-2043826348/tls.key\\\\\\\"\\\\nF0123 18:07:00.284360 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.797908 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.843059 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.872575 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.893462 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.925308 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.951796 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.966422 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.976925 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:08 crc kubenswrapper[4688]: E0123 18:07:08.977098 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:10.977074514 +0000 UTC m=+25.972898965 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.979294 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:08 crc kubenswrapper[4688]: I0123 18:07:08.989027 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.000639 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.022458 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.033337 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.077797 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.077842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.077863 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.077885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.077945 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.077958 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.077972 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.077989 4688 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078006 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:11.077992056 +0000 UTC m=+26.073816497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078019 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078025 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:11.078013817 +0000 UTC m=+26.073838258 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078042 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.077943 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078054 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078091 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:11.078078918 +0000 UTC m=+26.073903359 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.078104 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:11.078098669 +0000 UTC m=+26.073923110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.249484 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fw8bl"] Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.249861 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: W0123 18:07:09.252169 4688 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.252234 4688 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 18:07:09 crc kubenswrapper[4688]: W0123 18:07:09.252373 4688 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.252401 4688 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 18:07:09 crc kubenswrapper[4688]: W0123 18:07:09.252454 4688 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 23 18:07:09 crc 
kubenswrapper[4688]: E0123 18:07:09.252471 4688 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.289791 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:52:08.199859062 +0000 UTC Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.310513 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.316116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.326898 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.356108 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.356177 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.356177 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.356253 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.356343 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.356429 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.360482 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.361301 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.362130 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.362858 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.363623 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.365394 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.366131 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.366847 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.368066 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.368759 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.369386 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.370052 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.370947 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.372255 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.372885 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.374130 4688 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.374897 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.375691 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.376711 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.377520 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.378794 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.379505 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.380059 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.381068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/988366e9-b0b9-4785-ad68-185a42d66bc8-hosts-file\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.381086 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.381233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vw27\" (UniqueName: \"kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.382156 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.382701 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.383540 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.384492 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.385130 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.387143 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.387926 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.389050 4688 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.389166 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.390480 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.392336 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.393466 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.394164 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.396589 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.402419 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.403139 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.404607 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.405453 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.406038 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.407554 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.408685 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.409298 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.410098 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.410631 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.411591 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.413740 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.414385 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.414938 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.416000 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.416768 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.418902 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.419542 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.421617 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.457170 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.482460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/988366e9-b0b9-4785-ad68-185a42d66bc8-hosts-file\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.482520 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vw27\" (UniqueName: \"kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.482601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/988366e9-b0b9-4785-ad68-185a42d66bc8-hosts-file\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.483975 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.500173 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.587633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1"} Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.587693 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615"} Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.588343 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:07:09 crc kubenswrapper[4688]: E0123 18:07:09.588526 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.598437 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 18:02:08 +0000 UTC, rotation deadline is 2026-11-05 01:32:28.110889686 +0000 UTC Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.598481 4688 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6847h25m18.512410833s for next certificate rotation Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.601317 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.614307 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.627937 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"moun
tPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.634938 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.648617 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.661049 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.668855 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gf4sc"] Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.669109 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nkhx2"] Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.669370 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.669640 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.682021 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.682676 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6nsp2"] Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.683286 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.683562 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.684408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.684886 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685077 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685239 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685355 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685466 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685603 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.685721 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.688989 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.692003 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.692525 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.704138 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.729077 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-os-release\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-multus\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787749 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-multus-certs\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282fed6d-4a28-4498-add6-0240e6414dc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-system-cni-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787814 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-bin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787847 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-os-release\") pod 
\"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-system-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787890 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-hostroot\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787911 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86jc5\" (UniqueName: \"kubernetes.io/projected/282fed6d-4a28-4498-add6-0240e6414dc4-kube-api-access-86jc5\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787933 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j757h\" (UniqueName: \"kubernetes.io/projected/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-kube-api-access-j757h\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787954 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq66\" (UniqueName: \"kubernetes.io/projected/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-kube-api-access-bdq66\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.787999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788019 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-socket-dir-parent\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788049 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788067 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cnibin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788084 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-etc-kubernetes\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788123 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-kubelet\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788141 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-netns\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788159 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-daemon-config\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cnibin\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-k8s-cni-cncf-io\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788265 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/282fed6d-4a28-4498-add6-0240e6414dc4-rootfs\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788284 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/282fed6d-4a28-4498-add6-0240e6414dc4-proxy-tls\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788434 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cni-binary-copy\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.788454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-conf-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.821330 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892345 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j757h\" (UniqueName: \"kubernetes.io/projected/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-kube-api-access-j757h\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892379 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdq66\" (UniqueName: \"kubernetes.io/projected/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-kube-api-access-bdq66\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892400 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-socket-dir-parent\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-etc-kubernetes\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892479 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cnibin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892523 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892548 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-kubelet\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892566 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cnibin\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892583 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-netns\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-daemon-config\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/282fed6d-4a28-4498-add6-0240e6414dc4-proxy-tls\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892636 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-k8s-cni-cncf-io\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/282fed6d-4a28-4498-add6-0240e6414dc4-rootfs\") pod \"machine-config-daemon-nkhx2\" (UID: 
\"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892692 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cni-binary-copy\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892710 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-conf-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892727 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282fed6d-4a28-4498-add6-0240e6414dc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-os-release\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-multus\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892776 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-multus-certs\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892792 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-bin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892811 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-system-cni-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86jc5\" (UniqueName: \"kubernetes.io/projected/282fed6d-4a28-4498-add6-0240e6414dc4-kube-api-access-86jc5\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-os-release\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892894 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-system-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.892923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-hostroot\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.893012 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-hostroot\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-socket-dir-parent\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895271 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-etc-kubernetes\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895319 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-kubelet\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895366 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cnibin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895589 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-netns\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895628 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cnibin\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895646 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-multus\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.895699 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-conf-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-os-release\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896401 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-cni-binary-copy\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/282fed6d-4a28-4498-add6-0240e6414dc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896454 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/282fed6d-4a28-4498-add6-0240e6414dc4-rootfs\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896492 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-system-cni-dir\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896524 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-multus-certs\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-var-lib-cni-bin\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896602 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-os-release\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.896985 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-system-cni-dir\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.900037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-multus-daemon-config\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.901155 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/282fed6d-4a28-4498-add6-0240e6414dc4-proxy-tls\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.904889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.907919 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-host-run-k8s-cni-cncf-io\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 
18:07:09.932493 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.937355 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86jc5\" (UniqueName: \"kubernetes.io/projected/282fed6d-4a28-4498-add6-0240e6414dc4-kube-api-access-86jc5\") pod \"machine-config-daemon-nkhx2\" (UID: \"282fed6d-4a28-4498-add6-0240e6414dc4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.937676 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j757h\" (UniqueName: \"kubernetes.io/projected/8eabdd33-ae30-4252-8c4e-d016bcfe53fa-kube-api-access-j757h\") pod \"multus-additional-cni-plugins-6nsp2\" (UID: \"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\") " pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.939510 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdq66\" (UniqueName: \"kubernetes.io/projected/39fdea6e-e9b8-4fb4-9375-aaf302a204d3-kube-api-access-bdq66\") pod \"multus-gf4sc\" (UID: \"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\") " pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.945734 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.965030 4688 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.980930 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.983169 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.990005 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gf4sc" Jan 23 18:07:09 crc kubenswrapper[4688]: I0123 18:07:09.996882 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" Jan 23 18:07:10 crc kubenswrapper[4688]: W0123 18:07:10.000576 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod282fed6d_4a28_4498_add6_0240e6414dc4.slice/crio-45d62b090feb0ebfbaaf592b4a3095d0e994f12ef1c4c62dc3abb659649c6606 WatchSource:0}: Error finding container 45d62b090feb0ebfbaaf592b4a3095d0e994f12ef1c4c62dc3abb659649c6606: Status 404 returned error can't find the container with id 45d62b090feb0ebfbaaf592b4a3095d0e994f12ef1c4c62dc3abb659649c6606 Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.009215 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 18:07:10 crc kubenswrapper[4688]: W0123 18:07:10.023567 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eabdd33_ae30_4252_8c4e_d016bcfe53fa.slice/crio-402e436da53c7c16f6198f060137771044bbe7775beff23aed54364b77eb242b WatchSource:0}: Error finding container 402e436da53c7c16f6198f060137771044bbe7775beff23aed54364b77eb242b: Status 404 returned error can't find the container with id 402e436da53c7c16f6198f060137771044bbe7775beff23aed54364b77eb242b Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.224357 4688 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.295006 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:14:55.95776209 +0000 UTC Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.303525 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsqbq"] Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.304320 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.325366 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.325716 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.325888 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.326095 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.326238 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.326442 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.328910 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.335938 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.348531 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.367865 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.394896 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404252 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404323 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404343 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404613 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404658 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404708 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404731 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404759 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404844 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgmr\" (UniqueName: \"kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.404957 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.405023 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.405045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.413492 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.427547 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.447206 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.463867 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.484455 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.498752 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: E0123 18:07:10.500828 4688 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506306 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506379 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506400 4688 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506416 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sgmr\" (UniqueName: \"kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506432 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506459 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506479 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506510 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506526 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506542 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506593 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506607 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506630 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506645 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506664 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506681 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.506772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet\") pod 
\"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508456 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508488 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508782 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508822 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508844 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508865 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.508887 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509314 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 
18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509384 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509391 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509456 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509503 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.509873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.511955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.533217 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sgmr\" 
(UniqueName: \"kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr\") pod \"ovnkube-node-zsqbq\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.567229 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.573949 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.590862 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.598029 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.598333 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"402e436da53c7c16f6198f060137771044bbe7775beff23aed54364b77eb242b"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.599738 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.599801 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.599812 
4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"45d62b090feb0ebfbaaf592b4a3095d0e994f12ef1c4c62dc3abb659649c6606"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.601559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerStarted","Data":"18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.601597 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerStarted","Data":"b337a6712f5f04511cd7f7269b6c5bf6b59bfb661337f7c08c623023a15407de"} Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.602429 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:07:10 crc kubenswrapper[4688]: E0123 18:07:10.602685 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.613683 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.627460 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.632723 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:10 crc kubenswrapper[4688]: W0123 18:07:10.644282 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336645d6_da82_4dba_9436_4196367fb547.slice/crio-cca7942a8d8f4a15b1ab719bca31e57a69c36c06aa41c8028311ce9d5e0d1b6f WatchSource:0}: Error finding container cca7942a8d8f4a15b1ab719bca31e57a69c36c06aa41c8028311ce9d5e0d1b6f: Status 404 returned error can't find the container with id cca7942a8d8f4a15b1ab719bca31e57a69c36c06aa41c8028311ce9d5e0d1b6f Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.644620 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.650398 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.661632 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.676877 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.691545 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.705331 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.721583 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.734226 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.758312 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.774846 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.785177 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 18:07:10 crc kubenswrapper[4688]: E0123 18:07:10.791672 4688 projected.go:194] Error preparing data for projected volume kube-api-access-8vw27 for pod openshift-dns/node-resolver-fw8bl: failed to sync configmap cache: timed out waiting for the condition Jan 23 18:07:10 crc kubenswrapper[4688]: E0123 18:07:10.791980 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27 podName:988366e9-b0b9-4785-ad68-185a42d66bc8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:11.2919545 +0000 UTC m=+26.287778941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8vw27" (UniqueName: "kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27") pod "node-resolver-fw8bl" (UID: "988366e9-b0b9-4785-ad68-185a42d66bc8") : failed to sync configmap cache: timed out waiting for the condition Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.797592 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.853296 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.916779 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:10 crc kubenswrapper[4688]: I0123 18:07:10.944887 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:10Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.018509 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.018849 4688 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:15.01883196 +0000 UTC m=+30.014656391 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.122872 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.123167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.123291 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.123409 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123093 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123623 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123330 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123756 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123773 4688 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123388 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123836 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123843 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123716 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:15.123698281 +0000 UTC m=+30.119522722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123888 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:15.123865565 +0000 UTC m=+30.119690006 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.123902 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:15.123897306 +0000 UTC m=+30.119721747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.124155 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:15.124129552 +0000 UTC m=+30.119954013 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.295573 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:32:04.679640029 +0000 UTC Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.325656 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vw27\" (UniqueName: \"kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.333726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vw27\" (UniqueName: \"kubernetes.io/projected/988366e9-b0b9-4785-ad68-185a42d66bc8-kube-api-access-8vw27\") pod \"node-resolver-fw8bl\" (UID: \"988366e9-b0b9-4785-ad68-185a42d66bc8\") " pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.355665 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.355673 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.355795 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.355672 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.355881 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:11 crc kubenswrapper[4688]: E0123 18:07:11.355949 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.363600 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fw8bl" Jan 23 18:07:11 crc kubenswrapper[4688]: W0123 18:07:11.423697 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988366e9_b0b9_4785_ad68_185a42d66bc8.slice/crio-95aaa598bff1561e33e4e52e12d5e8231f6566d385eb5c497fae133ac8d6b638 WatchSource:0}: Error finding container 95aaa598bff1561e33e4e52e12d5e8231f6566d385eb5c497fae133ac8d6b638: Status 404 returned error can't find the container with id 95aaa598bff1561e33e4e52e12d5e8231f6566d385eb5c497fae133ac8d6b638 Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.525936 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.530440 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.536984 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.541539 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.560020 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.604442 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fw8bl" event={"ID":"988366e9-b0b9-4785-ad68-185a42d66bc8","Type":"ContainerStarted","Data":"95aaa598bff1561e33e4e52e12d5e8231f6566d385eb5c497fae133ac8d6b638"} Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.605975 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e" exitCode=0 Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.606034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e"} Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.609258 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" exitCode=0 Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.609797 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.609824 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"cca7942a8d8f4a15b1ab719bca31e57a69c36c06aa41c8028311ce9d5e0d1b6f"} Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.622321 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.644735 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.671770 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.691040 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.708660 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.730910 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f
6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.753807 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",
\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.793666 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.813584 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.824721 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.847741 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.956740 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.970221 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:11 crc kubenswrapper[4688]: I0123 18:07:11.994727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:11Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.006444 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.028712 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.043767 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.058769 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.072320 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.087727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.173143 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.187157 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.232450 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.296221 4688 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:51:12.072985038 +0000 UTC Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.621348 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758"} Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.624017 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.624052 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.624061 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.625405 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fw8bl" event={"ID":"988366e9-b0b9-4785-ad68-185a42d66bc8","Type":"ContainerStarted","Data":"0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a"} Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.636095 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.655432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.667779 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.683837 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.702882 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.715576 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.734770 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.753516 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.768287 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.860993 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.872660 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.895709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.914128 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.930053 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.939940 4688 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.947400 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.969290 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:12 crc kubenswrapper[4688]: I0123 18:07:12.982487 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:12.997210 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:12Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.099808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.099855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.099867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.099986 4688 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.176561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.177078 4688 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.177360 4688 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.178648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.178669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.178677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.178690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.178700 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.279638 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.296755 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:43:35.142698953 +0000 UTC Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.313426 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pnr5l"] Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.314123 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.316380 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.317009 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.317426 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.324578 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eb2218fb-8676-431e-b257-a3c9388095b8-serviceca\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.324666 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qpt\" (UniqueName: \"kubernetes.io/projected/eb2218fb-8676-431e-b257-a3c9388095b8-kube-api-access-72qpt\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.324700 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eb2218fb-8676-431e-b257-a3c9388095b8-host\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.336282 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.343241 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.354881 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"si
zeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":
\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.355436 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.355456 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.355553 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.355649 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.355745 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.355918 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.367551 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.370540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.370579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.370589 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.370605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.370615 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.413378 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.415892 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.420639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.420670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.420680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.420699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.420711 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.435231 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.437386 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.439152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.439177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.439208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.439227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.439239 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.452562 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.456648 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.457012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.457037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.457046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.457064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.457075 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.460017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72qpt\" (UniqueName: \"kubernetes.io/projected/eb2218fb-8676-431e-b257-a3c9388095b8-kube-api-access-72qpt\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.460059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eb2218fb-8676-431e-b257-a3c9388095b8-host\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.460103 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eb2218fb-8676-431e-b257-a3c9388095b8-serviceca\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.462336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/eb2218fb-8676-431e-b257-a3c9388095b8-serviceca\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.462452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/eb2218fb-8676-431e-b257-a3c9388095b8-host\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.471910 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1
688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":4977
42284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: E0123 18:07:13.472096 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.474061 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.474099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.474109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.474129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.474140 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.475836 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.486091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72qpt\" (UniqueName: \"kubernetes.io/projected/eb2218fb-8676-431e-b257-a3c9388095b8-kube-api-access-72qpt\") pod \"node-ca-pnr5l\" (UID: \"eb2218fb-8676-431e-b257-a3c9388095b8\") " pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.492717 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.514232 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.526533 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.539803 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.549339 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.568925 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.577139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.577199 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.577212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.577228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.577238 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.591691 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.603902 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd15
55237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.647928 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.659402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.659669 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.659743 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.670234 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.680121 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.680350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.680449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.680542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.680628 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.700790 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.715373 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.727847 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785293 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.785981 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c3
4f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:13Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.787760 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pnr5l" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.887712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.888007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.888018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.888031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:13 crc kubenswrapper[4688]: I0123 18:07:13.888040 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:13Z","lastTransitionTime":"2026-01-23T18:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.022104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.022135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.022143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.022157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.022166 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.124167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.124263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.124273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.124289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.124298 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.227582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.227629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.227639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.227656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.227667 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.298595 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:49:44.787519905 +0000 UTC Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.330407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.330458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.330473 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.330490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.330501 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.435139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.435175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.435201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.435219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.435231 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.537488 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.537531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.537542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.537559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.537572 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.639759 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.639804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.639814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.639828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.639838 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.663332 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.665384 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758" exitCode=0 Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.665512 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.667607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pnr5l" event={"ID":"eb2218fb-8676-431e-b257-a3c9388095b8","Type":"ContainerStarted","Data":"ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.667644 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pnr5l" event={"ID":"eb2218fb-8676-431e-b257-a3c9388095b8","Type":"ContainerStarted","Data":"1237c2c84e4ccfc440056115fb8f0ba84741e2caddcd6b21b7bbd1546a30bef4"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.678783 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.702893 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.721745 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.737413 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.746032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.746068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.746078 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.746094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.746105 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.748417 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.764146 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.860724 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.864957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.864984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.864994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.865008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.865018 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.888175 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.907995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.926045 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.939238 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.957976 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.969486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.969537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.969551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.969578 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.969595 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:14Z","lastTransitionTime":"2026-01-23T18:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.981632 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:14 crc kubenswrapper[4688]: I0123 18:07:14.997798 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"nam
e\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:14Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.016375 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.031032 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.047134 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.059898 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.072317 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.072361 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.072371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.072392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.072403 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.075028 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.076252 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.076395 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:23.076369783 +0000 UTC m=+38.072194264 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.088123 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.099667 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.110400 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.122608 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.135443 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.144106 4688 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.145628 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-pnr5l/status\": read tcp 38.129.56.213:34906->38.129.56.213:6443: use of closed network connection" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.173935 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.175682 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.175810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.175873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.175942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.176002 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.177088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.177141 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.177164 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.177223 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177352 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177374 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177385 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177430 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:23.177412568 +0000 UTC m=+38.173237009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177691 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177706 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177714 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177736 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:23.177729477 +0000 UTC m=+38.173553918 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177778 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177799 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:23.177792489 +0000 UTC m=+38.173616930 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177842 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.177860 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:23.17785515 +0000 UTC m=+38.173679591 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.188429 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.207473 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e971
4c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.278876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.278917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.278928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.278947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.278960 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.300092 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:26:42.863728397 +0000 UTC Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.355774 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.355780 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.355910 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.355795 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.356075 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:15 crc kubenswrapper[4688]: E0123 18:07:15.356140 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.366715 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.381139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.381172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.381196 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.381211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.381222 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.388990 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.404345 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.415698 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.430642 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.445226 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.458824 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.474925 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.484178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.484275 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.484290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.484306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.484318 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.489161 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.502860 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.517258 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.532285 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.544080 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.563277 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.586007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.586038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.586130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.586146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.586154 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.672010 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2" exitCode=0 Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.672118 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.688356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.688394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.688410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.688430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.688445 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.700720 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.721492 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.742935 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.765467 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.781430 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.790648 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.790688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.790699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.790718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.790731 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.795065 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.809119 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.821406 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.841520 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.858587 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.874485 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.887700 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.893367 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.893394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.893406 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.893420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.893428 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.897921 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.915470 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:15Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.996563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.996634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.996646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.996667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:15 crc kubenswrapper[4688]: I0123 18:07:15.996680 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:15Z","lastTransitionTime":"2026-01-23T18:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.099241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.099319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.099347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.099378 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.099396 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.202416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.202501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.202524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.202553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.202582 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.301295 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:37:57.558988113 +0000 UTC Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.305507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.305557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.305572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.305596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.305612 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.409547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.409871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.409885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.409905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.409916 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.512536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.512594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.512618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.512642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.512660 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.615793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.615853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.615864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.615885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.615897 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.680149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.686778 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.703583 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:0
6:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 
18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.719321 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.719391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.719403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.719421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.719434 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.720150 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.736022 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.750507 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.766402 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.780927 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.810151 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.821955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.821991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.822002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.822017 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.822027 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.825357 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.841220 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.856235 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.869838 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.890599 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",
\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.905813 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.922544 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:16Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.925635 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.925698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.925715 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.925735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:16 crc kubenswrapper[4688]: I0123 18:07:16.925749 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:16Z","lastTransitionTime":"2026-01-23T18:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.027917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.027950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.027958 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.027970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.027980 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.131014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.131068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.131083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.131104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.131119 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.234147 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.234238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.234249 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.234271 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.234283 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.302523 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:42:43.650374725 +0000 UTC Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.337515 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.337549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.337559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.337575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.337586 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.356141 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:17 crc kubenswrapper[4688]: E0123 18:07:17.356530 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.356248 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:17 crc kubenswrapper[4688]: E0123 18:07:17.356793 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.356153 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:17 crc kubenswrapper[4688]: E0123 18:07:17.357002 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.440480 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.440742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.440868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.441166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.441290 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.544320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.544355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.544366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.544382 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.544396 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.646781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.646820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.646833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.646852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.646865 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.692139 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a" exitCode=0 Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.692175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.710347 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.728586 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.746049 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.750304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.750358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.750369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.750383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.750394 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.761255 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.772237 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.785460 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.797038 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.810274 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.824891 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.839556 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.850754 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.852804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.852831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.852839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.852852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.852866 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.865996 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.881225 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.899521 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:17Z 
is after 2025-08-24T17:21:41Z" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.955512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.955559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.955568 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.955582 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:17 crc kubenswrapper[4688]: I0123 18:07:17.955595 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:17Z","lastTransitionTime":"2026-01-23T18:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.061340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.061381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.061391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.061405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.061419 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.164804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.164842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.164852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.164867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.164878 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.267257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.267292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.267304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.267319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.267331 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.303664 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:39:19.726071139 +0000 UTC Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.370299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.370347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.370362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.370383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.370397 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.472740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.472793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.472811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.472836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.472853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.575955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.575997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.576035 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.576075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.576086 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.677998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.678033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.678041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.678053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.678063 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.697259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.704422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.706496 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.706656 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.717727 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.730713 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.732622 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.733238 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.751275 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.762511 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.778725 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.785586 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.785659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.785676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.785694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 
18:07:18.785706 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.796106 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce
0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.809410 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.826508 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.841051 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.852588 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.862562 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.878128 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.888896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.888932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.888944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.888959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.888969 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.895433 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.918706 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.934938 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.949905 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.960316 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.975044 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.989323 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:18Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.990806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.990854 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.990866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.990884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:18 crc kubenswrapper[4688]: I0123 18:07:18.990897 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:18Z","lastTransitionTime":"2026-01-23T18:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.002041 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.014217 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.024640 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.034824 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.048616 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.059866 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.070203 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.087512 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.099460 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.099501 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.099512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.099527 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.099541 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.108011 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:19Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.202000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.202237 4688 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.202252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.202266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.202305 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.303877 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:03:03.555160221 +0000 UTC Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.305146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.305231 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.305241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.305256 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.305266 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.356318 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.356319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:19 crc kubenswrapper[4688]: E0123 18:07:19.356553 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.356574 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:19 crc kubenswrapper[4688]: E0123 18:07:19.356668 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:19 crc kubenswrapper[4688]: E0123 18:07:19.356772 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.408062 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.408114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.408125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.408143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.408160 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.511096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.511149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.511212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.511232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.511259 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.613858 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.613884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.613907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.613920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.613929 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.708154 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.716366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.716422 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.716434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.716457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.716469 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.819316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.819389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.819408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.819430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.819470 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.922446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.922484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.922516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.922537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:19 crc kubenswrapper[4688]: I0123 18:07:19.922550 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:19Z","lastTransitionTime":"2026-01-23T18:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.024689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.024740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.024751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.024766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.024777 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.126989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.127034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.127044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.127061 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.127079 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.230174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.230238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.230250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.230265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.230297 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.304232 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:38:45.741773342 +0000 UTC Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.333048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.333119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.333343 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.333417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.333434 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.435875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.435903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.435940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.435955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.435963 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.539021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.539075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.539086 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.539102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.539116 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.641510 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.641548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.641558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.641575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.641588 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.719671 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.743349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.743385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.743394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.743409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.743420 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.878750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.878813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.878824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.878841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.879197 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.981472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.981512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.981525 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.981540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:20 crc kubenswrapper[4688]: I0123 18:07:20.981548 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:20Z","lastTransitionTime":"2026-01-23T18:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.083337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.083383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.083394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.083409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.083419 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.185732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.185773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.185784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.185800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.185812 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.288725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.288753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.288761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.288773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.288782 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.305417 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:00:41.611033389 +0000 UTC Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.355630 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.355659 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.355630 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:21 crc kubenswrapper[4688]: E0123 18:07:21.355761 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:21 crc kubenswrapper[4688]: E0123 18:07:21.355821 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:21 crc kubenswrapper[4688]: E0123 18:07:21.355886 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.391474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.391525 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.391559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.391579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.391593 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.493849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.494074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.494101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.494118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.494128 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.597122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.597160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.597179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.597218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.597231 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.700440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.700527 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.700539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.700560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.700576 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.726039 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021" exitCode=0 Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.726111 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.746053 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.769036 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.784918 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.802562 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.803852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.803912 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.803936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.803968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.803994 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.820319 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.838303 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.951745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.951793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.951807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.951825 4688 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.951839 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:21Z","lastTransitionTime":"2026-01-23T18:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.958425 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.972451 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:21 crc kubenswrapper[4688]: I0123 18:07:21.993458 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787df
c2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:21Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.007588 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:22Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.024966 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:22Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.040532 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:22Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059355 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059397 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.059835 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:22Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.073264 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:22Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.161962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.162004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.162016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.162033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.162045 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.264769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.264811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.264820 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.264834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.264844 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.306257 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:51:15.725694791 +0000 UTC
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.367980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.368060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.368089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.368119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.368142 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.472396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.472434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.472443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.472457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.472467 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.575358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.575405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.575425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.575444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.575456 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.681764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.681843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.681864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.681898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.681924 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.784962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.785014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.785031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.785047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.785060 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.888213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.888284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.888297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.888317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.888329 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.990397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.990431 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.990439 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.990454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:22 crc kubenswrapper[4688]: I0123 18:07:22.990465 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:22Z","lastTransitionTime":"2026-01-23T18:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.098278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.098354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.098369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.098389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.098403 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.167484 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.167732 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:07:39.167715199 +0000 UTC m=+54.163539640 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.209748 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.209776 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.209784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.209797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.209807 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.268569 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.268617 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.268638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.268661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.268767 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.268813 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:39.268799546 +0000 UTC m=+54.264623987 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269113 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269219 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269345 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269456 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:39.269444853 +0000 UTC m=+54.265269294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269237 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269601 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:39.269593307 +0000 UTC m=+54.265417748 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269143 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269741 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269762 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.269840 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:07:39.269816353 +0000 UTC m=+54.265640794 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.306948 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:39:52.727334251 +0000 UTC Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.312349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.312443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.312470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.312505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.312528 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.357575 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.357695 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.357772 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.357838 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.357877 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.357930 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.416066 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.416111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.416121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.416136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.416145 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.444862 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4"] Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.445354 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.447798 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.449662 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.464710 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.470841 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.470878 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7784\" (UniqueName: \"kubernetes.io/projected/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-kube-api-access-b7784\") 
pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.470938 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.470966 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.486995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.504651 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.515808 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.518168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.518364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.518465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.518559 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.518663 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.533551 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"
finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.546044 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.559550 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572153 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7784\" (UniqueName: \"kubernetes.io/projected/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-kube-api-access-b7784\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572332 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572524 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.572876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.573042 
4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.578216 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.591493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7784\" (UniqueName: \"kubernetes.io/projected/19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26-kube-api-access-b7784\") pod \"ovnkube-control-plane-749d76644c-2s8n4\" (UID: \"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.591968 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.606001 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.618639 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.621436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.621466 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.621473 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.621489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.621500 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.632080 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.644002 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.657710 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.667863 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.723733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.723775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.723786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.723804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.723816 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.735121 4688 generic.go:334] "Generic (PLEG): container finished" podID="8eabdd33-ae30-4252-8c4e-d016bcfe53fa" containerID="640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb" exitCode=0 Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.735165 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerDied","Data":"640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.757573 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.766176 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.784627 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.796030 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.806628 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.824674 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.829894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.829935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.829946 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.829965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.829979 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.842126 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.865854 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.865882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.865891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc 
kubenswrapper[4688]: I0123 18:07:23.865904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.865914 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.941533 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787df
c2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.953563 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed2
1\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.958553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: 
I0123 18:07:23.958598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.958654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.958675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.958687 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.960065 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.975862 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.979038 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.980954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.980989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.980999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.981016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.981027 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:23Z","lastTransitionTime":"2026-01-23T18:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
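The entries above complete the pattern that dominates this boot: every node and pod status PATCH from the kubelet is intercepted by the network-node-identity validating webhook, and the webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-23, so the TLS client rejects the handshake before any request reaches the API object. The "certificate has expired or is not yet valid" text is the standard x509 validity-window check, which is pure local clock arithmetic. A minimal standalone sketch of that check in Go (the certificate path is hypothetical; the output format mirrors the error in the log):

```go
// certcheck.go - reproduce the validity-window test that fails in this log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point it at the webhook's serving certificate.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Matches the log: "current time <now> is after <NotAfter>".
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Once the current time falls outside [NotBefore, NotAfter], nothing else in the chain is consulted; the handshake fails and the webhook call never happens, which is why each retry below produces the identical error.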
Jan 23 18:07:23 crc kubenswrapper[4688]: E0123 18:07:23.995041 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... duplicate image list elided; identical to the list in the first status patch above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:23 crc kubenswrapper[4688]: I0123 18:07:23.997658 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:23Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.000744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.000769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.000777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.000790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.000800 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
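Worth noting before the next retry: the webhook endpoint at https://127.0.0.1:9743 is served by the network-node-identity-vrzqb pod on this same node, and the entry below shows its approver and webhook containers Running. The pod is healthy; it is only presenting an outdated certificate, and its own status patches are rejected by the very webhook it serves. A small diagnostic sketch that connects to the endpoint from the log and dumps the certificate actually being served (InsecureSkipVerify is appropriate here because the goal is inspection, not trust):

```go
// dumpcert.go - inspect the certificate served at the endpoint from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skipping verification lets us complete the handshake against an
	// expired certificate instead of failing the way the kubelet does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("server presented no certificates")
	}
	leaf := certs[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(leaf.NotAfter))
}
```

Run against this node it would be expected to report notAfter=2025-08-24T17:21:41Z, the bound quoted in every error below.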
Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.021265 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [ ... payload identical to the 18:07:23 attempt above, except that every lastHeartbeatTime and lastTransitionTime now reads 2026-01-23T18:07:24Z; duplicate allocatable/capacity, conditions, image list, and nodeInfo elided ... ] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.021591 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.026905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.026969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.026981 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.026995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.027004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.036948 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.040753 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.040906 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.042538 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.042575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.042594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.042609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.042619 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.053513 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.072575 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.086916 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.145683 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.145782 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.145798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.145825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.145845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.248657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.248702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.248711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.248726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.248737 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.307952 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:15:04.736824478 +0000 UTC Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.352970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.353048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.353064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.353094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.353112 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.455531 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.455576 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.455585 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.455601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.455612 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.558674 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.558729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.558743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.558763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.558776 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.611008 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kr87l"] Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.611751 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.611846 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.632384 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.642709 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.642766 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9sc\" (UniqueName: \"kubernetes.io/projected/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-kube-api-access-vp9sc\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.647279 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.662083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.662135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.662148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.662169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.662204 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.663389 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.684175 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.709960 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.726749 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.742381 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/0.log" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.743688 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9sc\" (UniqueName: \"kubernetes.io/projected/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-kube-api-access-vp9sc\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.743814 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.743923 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:24 crc kubenswrapper[4688]: E0123 18:07:24.743974 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:25.243958973 +0000 UTC m=+40.239783414 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.746868 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.747472 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd" exitCode=1 Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.747544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.748302 4688 scope.go:117] "RemoveContainer" 
containerID="804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.755256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" event={"ID":"8eabdd33-ae30-4252-8c4e-d016bcfe53fa","Type":"ContainerStarted","Data":"9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.757256 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" event={"ID":"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26","Type":"ContainerStarted","Data":"2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.757312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" event={"ID":"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26","Type":"ContainerStarted","Data":"120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.757338 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" event={"ID":"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26","Type":"ContainerStarted","Data":"3b251570263fc4f833b1f33127433ed857d33afeaa7781b59b6fbbba0d91cb02"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.763952 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.763988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.763997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.764012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.764024 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.764135 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.769698 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9sc\" (UniqueName: \"kubernetes.io/projected/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-kube-api-access-vp9sc\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.778114 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.796986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787df
c2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.807048 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.819824 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.832510 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.848765 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.861150 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.866150 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.866177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.866197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.866224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.866233 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.878226 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.892828 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.905870 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.930529 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787df
c2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"l\\\\nI0123 18:07:24.430450 5897 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:24.430552 5897 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:07:24.431318 5897 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:07:24.430901 5897 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:24.431387 5897 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:07:24.431478 5897 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:24.431488 5897 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 18:07:24.431540 5897 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:07:24.431582 5897 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:24.431621 5897 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:24.431636 5897 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 18:07:24.431654 5897 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:24.431645 5897 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:24.431703 5897 factory.go:656] Stopping watch factory\\\\nI0123 18:07:24.431741 5897 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:24.431709 5897 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.943256 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.960706 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.967961 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.967982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.967989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.968002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.968010 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:24Z","lastTransitionTime":"2026-01-23T18:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.973148 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.986033 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:24 crc kubenswrapper[4688]: I0123 18:07:24.997329 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:24Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.015339 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.030618 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.044973 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.058759 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 
18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.070776 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.070822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.070832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.070848 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.070861 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.072337 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.084033 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.092864 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.101637 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.460773 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:25 crc kubenswrapper[4688]: E0123 18:07:25.460965 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:25 crc kubenswrapper[4688]: E0123 18:07:25.461017 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:26.46100146 +0000 UTC m=+41.456825911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.461577 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:27:20.401256456 +0000 UTC Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.461656 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:25 crc kubenswrapper[4688]: E0123 18:07:25.461893 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.462002 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:25 crc kubenswrapper[4688]: E0123 18:07:25.462983 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.463012 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.461452 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:25 crc kubenswrapper[4688]: E0123 18:07:25.479453 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.482374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.482424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.482443 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.482467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.482486 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.497257 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.514437 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.530474 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.548153 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc
9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.561535 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.576206 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.584983 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.585029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.585042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.585060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.585089 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.587568 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.607387 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.620805 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.642065 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787df
c2aed01af6d0e8a77996f2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"l\\\\nI0123 18:07:24.430450 5897 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:24.430552 5897 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:07:24.431318 5897 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:07:24.430901 5897 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:24.431387 5897 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:07:24.431478 5897 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:24.431488 5897 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 18:07:24.431540 5897 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:07:24.431582 5897 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:24.431621 5897 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:24.431636 5897 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 18:07:24.431654 5897 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:24.431645 5897 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:24.431703 5897 factory.go:656] Stopping watch factory\\\\nI0123 18:07:24.431741 5897 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:24.431709 5897 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.661725 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.683723 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.688735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.688790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.688804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.688827 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.688841 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.706417 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.722588 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.735087 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.750986 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.763565 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/0.log" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.766496 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.766653 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.783118 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-control
ler-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.792481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.792872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.792969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.793197 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.793301 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.799206 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.821082 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.837408 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T1
8:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 
18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.852016 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.864778 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.879014 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.892251 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.895915 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.896092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.896173 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.896299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.896389 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:25Z","lastTransitionTime":"2026-01-23T18:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.904564 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.922678 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"l\\\\nI0123 18:07:24.430450 5897 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:24.430552 5897 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:07:24.431318 5897 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:07:24.430901 5897 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:24.431387 5897 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:07:24.431478 5897 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:24.431488 5897 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 18:07:24.431540 5897 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:07:24.431582 5897 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:24.431621 5897 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:24.431636 5897 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 18:07:24.431654 5897 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:24.431645 5897 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:24.431703 5897 factory.go:656] Stopping watch factory\\\\nI0123 18:07:24.431741 5897 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:24.431709 5897 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.936906 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.952558 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.967094 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:25Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.999364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.999774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:25 crc kubenswrapper[4688]: I0123 18:07:25.999877 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.000089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.000306 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.085642 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.097088 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.106648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.106694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.106707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.106723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.106734 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.112697 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc
2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.208789 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.208817 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.208825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.208837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.208845 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.312808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.312870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.312887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.312916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.312933 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.355349 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:26 crc kubenswrapper[4688]: E0123 18:07:26.355550 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.415618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.415666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.415678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.415707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.415721 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.462060 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:37:35.662200167 +0000 UTC Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.470649 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:26 crc kubenswrapper[4688]: E0123 18:07:26.470813 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:26 crc kubenswrapper[4688]: E0123 18:07:26.470876 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:28.470861832 +0000 UTC m=+43.466686273 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.518824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.518853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.518861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.518874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.518883 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.620826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.620861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.620875 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.620892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.620905 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.723047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.723085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.723098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.723112 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.723123 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.771905 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.774046 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.775136 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.777484 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/1.log" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.778270 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/0.log" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.787594 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.787646 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3" exitCode=1 Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.787695 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.787739 4688 scope.go:117] "RemoveContainer" containerID="804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.788502 4688 scope.go:117] "RemoveContainer" containerID="f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3" Jan 23 18:07:26 crc kubenswrapper[4688]: E0123 18:07:26.788699 4688 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.815613 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.825477 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.825511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.825520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.825536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.825546 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.830807 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.848648 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.881304 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac
91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"l\\\\nI0123 18:07:24.430450 5897 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:24.430552 5897 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:07:24.431318 5897 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:07:24.430901 5897 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:24.431387 5897 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:07:24.431478 5897 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:24.431488 5897 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 18:07:24.431540 5897 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:07:24.431582 5897 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:24.431621 5897 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:24.431636 5897 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 18:07:24.431654 5897 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:24.431645 5897 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:24.431703 5897 factory.go:656] Stopping watch factory\\\\nI0123 18:07:24.431741 5897 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:24.431709 5897 handler.go:208] Removed *v1.Node event handler 
7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.897907 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.916754 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.928011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.928055 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.928067 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.928089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.928103 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:26Z","lastTransitionTime":"2026-01-23T18:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.938826 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:26 crc kubenswrapper[4688]: I0123 18:07:26.988236 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:26Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.006504 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.029375 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.030669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.030709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc 
kubenswrapper[4688]: I0123 18:07:27.030723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.030745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.030758 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.045578 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.063435 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.077019 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.092051 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.107229 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.125523 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.133679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.133717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.133726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.133738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.133748 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.141626 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.155233 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.168838 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.182024 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.194460 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.206408 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.228073 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://804449c615ed518193ebf10f3b998d71ac9787dfc2aed01af6d0e8a77996f2dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"l\\\\nI0123 18:07:24.430450 5897 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:24.430552 5897 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 18:07:24.431318 5897 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 18:07:24.430901 5897 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:24.431387 5897 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 18:07:24.431478 5897 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:24.431488 5897 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0123 18:07:24.431540 5897 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 18:07:24.431582 5897 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:24.431621 5897 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:24.431636 5897 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 18:07:24.431654 5897 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:24.431645 5897 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:24.431703 5897 factory.go:656] Stopping watch factory\\\\nI0123 18:07:24.431741 5897 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:24.431709 5897 handler.go:208] Removed *v1.Node event handler 7\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 
18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.236490 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.236536 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.236548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.236567 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.236580 4688 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.242773 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.255503 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.270687 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.286532 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.298387 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.315985 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.333500 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.338814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.338927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.338939 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.338960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.338973 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.348372 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:27Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.355648 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.355706 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.355772 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:27 crc kubenswrapper[4688]: E0123 18:07:27.355812 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:27 crc kubenswrapper[4688]: E0123 18:07:27.355991 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:27 crc kubenswrapper[4688]: E0123 18:07:27.356069 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.442073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.442138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.442157 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.442177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.442227 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.462267 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:07:12.86616866 +0000 UTC Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.545511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.545578 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.545600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.545630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.545653 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.648728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.648791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.648809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.648832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.648847 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.751539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.751620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.751633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.751654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.751667 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.794242 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/1.log" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.854366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.854435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.854446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.854463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.854473 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.957990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.958057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.958069 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.958092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:27 crc kubenswrapper[4688]: I0123 18:07:27.958106 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:27Z","lastTransitionTime":"2026-01-23T18:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.060628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.060932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.061043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.061176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.061363 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.164304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.164374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.164395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.164426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.164447 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.268247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.268325 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.268354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.268383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.268401 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.355597 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:28 crc kubenswrapper[4688]: E0123 18:07:28.355753 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.370526 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.370571 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.370581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.370597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.370609 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.463284 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:38:33.88956187 +0000 UTC Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.473493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.473768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.473908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.473998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.474215 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.494532 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:28 crc kubenswrapper[4688]: E0123 18:07:28.494711 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:28 crc kubenswrapper[4688]: E0123 18:07:28.494814 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:32.494795031 +0000 UTC m=+47.490619472 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.576354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.576397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.576407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.576420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.576429 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.678511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.678554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.678563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.678576 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.678585 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.781917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.782001 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.782027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.782060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.782083 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.883798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.884228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.884421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.884599 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.884785 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.987879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.987953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.987977 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.988008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:28 crc kubenswrapper[4688]: I0123 18:07:28.988032 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:28Z","lastTransitionTime":"2026-01-23T18:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.090920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.090992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.091019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.091053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.091081 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.194684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.194743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.194761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.194786 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.194804 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.297131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.297169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.297178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.297214 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.297229 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.356528 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.356578 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.356578 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:29 crc kubenswrapper[4688]: E0123 18:07:29.356714 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:29 crc kubenswrapper[4688]: E0123 18:07:29.356877 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:29 crc kubenswrapper[4688]: E0123 18:07:29.357085 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.400903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.400973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.400994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.401019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.401038 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.463659 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:28:13.376842803 +0000 UTC Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.504288 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.504688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.504709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.504736 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.504760 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.607518 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.607578 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.607597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.607616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.607630 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.710517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.710563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.710572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.710592 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.710606 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.813419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.813464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.813475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.813492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.813507 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.916113 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.916176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.916212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.916237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:29 crc kubenswrapper[4688]: I0123 18:07:29.916252 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:29Z","lastTransitionTime":"2026-01-23T18:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.019547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.019616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.019634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.019660 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.019678 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.122368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.122441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.122451 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.122465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.122475 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.225368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.225448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.225482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.225509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.225523 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.328937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.328992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.329006 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.329026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.329038 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.356287 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:30 crc kubenswrapper[4688]: E0123 18:07:30.356457 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.432511 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.432561 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.432575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.432594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.432608 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.463905 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:45:28.836724468 +0000 UTC Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.535425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.535471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.535482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.535500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.535512 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.637664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.637701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.637710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.637726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.637735 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.741424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.741474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.741486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.741503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.741515 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.843679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.843708 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.843716 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.843727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.843736 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.946041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.946080 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.946089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.946101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:30 crc kubenswrapper[4688]: I0123 18:07:30.946111 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:30Z","lastTransitionTime":"2026-01-23T18:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.048509 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.048539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.048547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.048560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.048569 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.133054 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.134022 4688 scope.go:117] "RemoveContainer" containerID="f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3" Jan 23 18:07:31 crc kubenswrapper[4688]: E0123 18:07:31.134275 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.149065 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.150498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.150519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.150527 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.150541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.150550 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.162309 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.179667 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.190708 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.205912 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.219145 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.233536 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.245318 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 
18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.253310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.253371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.253386 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.253402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.253413 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.262008 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.277114 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.292993 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.309216 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.330883 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.347733 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.355411 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:31 crc kubenswrapper[4688]: E0123 18:07:31.355640 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.355816 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.355450 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:31 crc kubenswrapper[4688]: E0123 18:07:31.356099 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:31 crc kubenswrapper[4688]: E0123 18:07:31.356265 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.356300 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.356568 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.356654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.356734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.356805 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.368595 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac
91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.381223 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:31Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.459757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.459799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.459808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.459825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.459836 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.464882 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:12:52.570842925 +0000 UTC Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.562929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.563245 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.563317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.563379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.563515 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.666395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.666688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.666862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.666951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.667022 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.769839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.769885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.769897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.769913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.769924 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.872889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.872932 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.872943 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.872959 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.872971 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.974965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.975243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.975364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.975496 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:31 crc kubenswrapper[4688]: I0123 18:07:31.975645 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:31Z","lastTransitionTime":"2026-01-23T18:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.078782 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.079248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.079413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.079607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.079686 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.182484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.182733 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.182808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.182890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.183001 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.285044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.285088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.285100 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.285116 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.285126 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.355543 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:32 crc kubenswrapper[4688]: E0123 18:07:32.355686 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.387793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.388048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.388136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.388246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.388337 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.465298 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:34:47.187601408 +0000 UTC Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.491258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.491319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.491357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.491390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.491414 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.534610 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:32 crc kubenswrapper[4688]: E0123 18:07:32.534807 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:32 crc kubenswrapper[4688]: E0123 18:07:32.534947 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:40.534905315 +0000 UTC m=+55.530729796 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.593889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.593947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.593962 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.593984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.594000 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.696616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.696650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.696658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.696671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.696680 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.799929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.799990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.800006 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.800031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.800055 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.903235 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.903281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.903291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.903310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:32 crc kubenswrapper[4688]: I0123 18:07:32.903320 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:32Z","lastTransitionTime":"2026-01-23T18:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.006308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.006698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.006832 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.006969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.007107 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.109747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.109997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.110079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.110166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.110323 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.212697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.212723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.212732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.212746 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.212755 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.315649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.315723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.315745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.315767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.315784 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.356471 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.356531 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.356510 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:33 crc kubenswrapper[4688]: E0123 18:07:33.356633 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:33 crc kubenswrapper[4688]: E0123 18:07:33.356757 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:33 crc kubenswrapper[4688]: E0123 18:07:33.356841 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.417907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.417955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.417966 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.417984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.417995 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.465644 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:23:24.403211623 +0000 UTC Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.520877 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.521170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.521277 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.521354 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.521418 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.623159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.623263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.623283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.623297 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.623306 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.725784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.725852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.725870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.725891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.725907 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.827825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.827870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.827881 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.827896 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.827908 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.930782 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.930834 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.930843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.930859 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:33 crc kubenswrapper[4688]: I0123 18:07:33.930868 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:33Z","lastTransitionTime":"2026-01-23T18:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.032816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.032866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.032874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.032889 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.032898 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.136486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.136565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.136581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.136602 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.136617 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.239725 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.239815 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.239842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.239874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.239897 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.266810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.266856 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.266868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.266885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.266897 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.283458 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.288464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.288501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.288513 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.288532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.288544 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.308111 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the first status patch above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.313308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.313379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
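Every retry above fails the same way: the status PATCH is rejected because the API server cannot complete its call to the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-23T18:07:34Z. A minimal Go sketch for inspecting that certificate's validity window from the node, assuming only the address and port printed in the log (this probe is an illustration, not part of the kubelet):

```go
// Dial the webhook endpoint and print its serving certificate's validity
// window. InsecureSkipVerify lets us fetch an already-expired certificate
// for inspection; nothing here trusts the connection.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect only, never trust
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter) // the log implies 2025-08-24T17:21:41Z
}
```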
event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.313403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.313437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.313459 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.330527 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.334563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.334610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
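The identical PATCH is attempted a fixed number of times before the kubelet gives up; further below the log reports "update node status exceeds retry count" once the budget is exhausted. A schematic Go sketch of that control flow, offered as an illustration of the pattern visible here rather than the actual kubelet source (upstream kubelet caps the attempts with a constant, nodeStatusUpdateRetry = 5, assumed below):

```go
// Schematic illustration of the bounded retry loop visible in this log:
// several "Error updating node status, will retry" entries followed by
// "update node status exceeds retry count" once the budget is spent.
package main

import (
	"errors"
	"fmt"
)

// Upstream kubelet uses nodeStatusUpdateRetry = 5; assumed here.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the real PATCH call; in this log every
// attempt fails the same way, at the admission webhook.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```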
event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.334621 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.334637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.334653 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.347553 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.352243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.352304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
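Independently of the webhook failure, the node keeps reporting Ready=False because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A small Go sketch of that readiness check, assuming the directory named in the message and the conventional CNI config extensions (.conf, .conflist, .json; the extension list is an assumption, not kubelet source):

```go
// Report whether any CNI network configuration is present in the directory
// named by the NetworkPluginNotReady message.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d/" // path from the log message

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}

	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; the node will stay NotReady")
	}
}
```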
event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.352326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.352348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.352363 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.355822 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.356033 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.368058 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:34Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:34 crc kubenswrapper[4688]: E0123 18:07:34.368288 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.370160 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.370226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.370239 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.370259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.370273 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.466036 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:26:49.354009161 +0000 UTC Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.473408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.473450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.473458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.473475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.473485 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.575810 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.575855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.575867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.575898 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.575912 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.678344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.678417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.678436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.678463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.678483 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.781779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.781840 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.781862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.781893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.781914 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.884180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.884248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.884258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.884274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.884283 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.986723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.986772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.986785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.986811 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:34 crc kubenswrapper[4688]: I0123 18:07:34.986826 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:34Z","lastTransitionTime":"2026-01-23T18:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.089436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.089465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.089475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.089491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.089502 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.191905 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.192209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.192222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.192238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.192254 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.294368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.294823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.295052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.295094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.295117 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.355421 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.355439 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.355493 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:35 crc kubenswrapper[4688]: E0123 18:07:35.355557 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:35 crc kubenswrapper[4688]: E0123 18:07:35.355609 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:35 crc kubenswrapper[4688]: E0123 18:07:35.355681 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.370976 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.389602 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.397275 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.397330 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.397345 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.397364 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.397376 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.405751 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.418814 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.431098 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.444575 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.456244 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.466158 4688 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:53:53.111951705 +0000 UTC Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.474422 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 
2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.487250 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.499528 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.499587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.499595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.499610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.499620 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.500884 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.514072 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.529669 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.541451 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.552630 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.568707 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.581921 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:35Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.601974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.602020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.602033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.602049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.602061 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.704794 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.704825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.704833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.704845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.704853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.807540 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.807594 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.807606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.807623 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.807636 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.910228 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.910530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.910600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.910669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:35 crc kubenswrapper[4688]: I0123 18:07:35.910937 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:35Z","lastTransitionTime":"2026-01-23T18:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.013933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.013970 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.013979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.013995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.014004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.115901 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.115949 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.115960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.115978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.115989 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.219132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.219209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.219222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.219241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.219254 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.322053 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.322109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.322121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.322142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.322154 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.355796 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:36 crc kubenswrapper[4688]: E0123 18:07:36.356070 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.425033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.425083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.425093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.425108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.425117 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.461794 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.472128 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:48:13.578599121 +0000 UTC Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.487834 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.502928 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.525916 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac
91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.527469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.527521 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.527529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.527550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.527563 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.538441 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.554751 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.571179 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.586450 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.596770 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.611570 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.628414 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.630312 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.630397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.630415 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.630441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.630497 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.644234 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.657051 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.679779 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.695631 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.709444 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.721657 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:36Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.733507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.733583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.733603 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.733630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.733658 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.835867 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.835899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.835908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.835920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.835929 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.937721 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.937773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.937785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.937800 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:36 crc kubenswrapper[4688]: I0123 18:07:36.937811 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:36Z","lastTransitionTime":"2026-01-23T18:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.039709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.039757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.039769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.039785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.039796 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.142426 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.142465 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.142475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.142491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.142503 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.245444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.245494 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.245506 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.245526 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.245541 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.348654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.348747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.348771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.348801 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.348823 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.355875 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:37 crc kubenswrapper[4688]: E0123 18:07:37.356100 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.355947 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:37 crc kubenswrapper[4688]: E0123 18:07:37.356336 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.355875 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:37 crc kubenswrapper[4688]: E0123 18:07:37.356509 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.452099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.452166 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.452229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.452260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.452280 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.473174 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:28:11.179154089 +0000 UTC Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.554571 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.554617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.554626 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.554639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.554648 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.657632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.657665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.657676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.657692 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.657703 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.760060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.760111 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.760124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.760141 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.760152 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.863320 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.863628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.863729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.863828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.863918 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.966093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.966122 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.966133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.966148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:37 crc kubenswrapper[4688]: I0123 18:07:37.966158 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:37Z","lastTransitionTime":"2026-01-23T18:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.069324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.069356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.069368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.069383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.069394 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.177156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.177641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.177720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.177787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.177869 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.281747 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.281790 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.281801 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.281819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.281830 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.356358 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:38 crc kubenswrapper[4688]: E0123 18:07:38.356607 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.384726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.384769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.384781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.384795 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.384806 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.473445 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:41:34.20273816 +0000 UTC Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.487007 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.487055 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.487109 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.487131 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.487146 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.589900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.590370 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.590616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.590838 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.591094 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.646031 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.658427 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.664512 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.678871 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.694501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.694545 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.694554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.694569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.694579 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.701044 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.712567 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.723978 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.735492 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.747419 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.756675 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.774005 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.786612 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.797034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.797087 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.797101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.797120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.797132 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.801965 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.815359 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.829391 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.843432 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.857068 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.873709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:38Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.899566 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.899596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.899604 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.899618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:38 crc kubenswrapper[4688]: I0123 18:07:38.899628 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:38Z","lastTransitionTime":"2026-01-23T18:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.002250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.002295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.002314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.002332 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.002345 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.104693 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.104743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.104751 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.104763 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.104773 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.205052 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.205263 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:11.205225252 +0000 UTC m=+86.201049733 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.207624 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.207750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.207828 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.207916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.208002 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.306926 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.306997 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.307026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.307055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307177 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307257 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:08:11.307239805 +0000 UTC m=+86.303064246 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307443 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307457 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307466 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307494 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:08:11.307482582 +0000 UTC m=+86.303307023 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307578 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307626 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307643 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307673 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307705 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:08:11.307685647 +0000 UTC m=+86.303510148 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.307728 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:08:11.307718058 +0000 UTC m=+86.303542549 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.310295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.310322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.310331 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.310346 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.310354 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.356446 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.356558 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.356593 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.356713 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.356841 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:39 crc kubenswrapper[4688]: E0123 18:07:39.356906 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.412295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.412333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.412344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.412359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.412371 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.474024 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:19:50.909897215 +0000 UTC Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.515140 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.515176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.515201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.515217 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.515226 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.617850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.617893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.617903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.617919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.617932 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.720598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.720639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.720675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.720689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.720698 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.823837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.823899 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.823920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.823947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.823968 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.927326 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.927405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.927444 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.927468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:39 crc kubenswrapper[4688]: I0123 18:07:39.927484 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:39Z","lastTransitionTime":"2026-01-23T18:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.029921 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.029965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.029992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.030016 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.030031 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.133578 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.133671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.133696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.133728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.133751 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.237359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.237446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.237468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.237491 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.237510 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.340467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.340503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.340520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.340537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.340548 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.355943 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:40 crc kubenswrapper[4688]: E0123 18:07:40.356044 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.444019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.444150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.444174 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.444227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.444245 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.474805 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:50:32.081476448 +0000 UTC Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.547494 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.547542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.547558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.547577 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.547591 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.621511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:40 crc kubenswrapper[4688]: E0123 18:07:40.621656 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:40 crc kubenswrapper[4688]: E0123 18:07:40.621709 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:07:56.621695505 +0000 UTC m=+71.617519946 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.650267 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.650308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.650319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.650335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.650348 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.752873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.752910 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.752920 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.752936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.752948 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.855206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.855244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.855255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.855282 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.855295 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.957520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.957563 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.957572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.957587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:40 crc kubenswrapper[4688]: I0123 18:07:40.957599 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:40Z","lastTransitionTime":"2026-01-23T18:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.060393 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.060436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.060446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.060461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.060475 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.163655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.163698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.163710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.163728 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.163740 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.266500 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.266535 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.266544 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.266558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.266569 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.355489 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.355569 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:41 crc kubenswrapper[4688]: E0123 18:07:41.355730 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.355502 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:41 crc kubenswrapper[4688]: E0123 18:07:41.355903 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:41 crc kubenswrapper[4688]: E0123 18:07:41.356080 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.369446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.369486 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.369498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.369516 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.369529 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.472555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.472596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.472611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.472628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.472639 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.475170 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:24:38.38145175 +0000 UTC Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.576554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.576608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.576619 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.576639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.576651 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.679804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.679855 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.679866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.679884 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.679902 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.782255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.782309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.782318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.782333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.782342 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.885769 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.885813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.885824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.885843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.885878 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.988919 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.988963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.988976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.988994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:41 crc kubenswrapper[4688]: I0123 18:07:41.989007 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:41Z","lastTransitionTime":"2026-01-23T18:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.091587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.091652 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.091665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.091691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.091702 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.195148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.195221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.195234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.195257 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.195295 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.297853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.297916 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.297928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.297950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.297986 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.356007 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:42 crc kubenswrapper[4688]: E0123 18:07:42.356174 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.357236 4688 scope.go:117] "RemoveContainer" containerID="f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.400507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.400547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.400560 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.400575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.400587 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.476388 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:25:46.64516921 +0000 UTC Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.503764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.503807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.503839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.503857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.503866 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.606232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.606284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.606295 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.606314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.606326 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.709397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.709455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.709469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.709495 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.709512 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.812895 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.812960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.812978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.813004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.813020 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.848793 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/1.log" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.852176 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.852835 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.868975 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1706281
8e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.882070 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.893732 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.907427 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.915744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.915787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.915798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.915812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.915824 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:42Z","lastTransitionTime":"2026-01-23T18:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.928155 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.943347 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.963681 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e783
4b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.977349 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:42 crc kubenswrapper[4688]: I0123 18:07:42.993320 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:42Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.011608 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.018482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.018524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.018535 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.018551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.018560 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.031092 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.053132 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.066061 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.082431 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.100952 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120155 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120983 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.120997 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.135690 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.223405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.223478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.223492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.223512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.223525 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.326675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.326710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.326718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.326732 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.326742 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.355731 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.355777 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.355739 4688 util.go:30] "No sandbox for pod can be found. 
Jan 23 18:07:43 crc kubenswrapper[4688]: E0123 18:07:43.355880 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:43 crc kubenswrapper[4688]: E0123 18:07:43.355975 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:43 crc kubenswrapper[4688]: E0123 18:07:43.356053 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.429176 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.429233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.429244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.429259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.429268 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
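
The repeating NodeNotReady condition carries the root cause in its message: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so the container runtime reports NetworkReady=false and the sandbox-less pods above ("Error syncing pod, skipping") cannot start. On this CRC node that configuration is normally written by multus once the default ovn-kubernetes network is healthy, so it stays missing while ovnkube-node fails (its CrashLoopBackOff appears further down). A rough stand-in for the check being reported, as a Go sketch (directory path from the log, stdlib only):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatalf("read %s: %v", dir, err)
        }
        if len(entries) == 0 {
            fmt.Println("no CNI config present: node stays NotReady")
            return
        }
        for _, e := range entries {
            // Any *.conf or *.conflist here satisfies the runtime's network check.
            fmt.Println(filepath.Join(dir, e.Name()))
        }
    }
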
Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.476669 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:27:38.758060284 +0000 UTC Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.532151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.532208 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.532218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.532232 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.532242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.634768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.634814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.634827 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.634846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.634858 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
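
One healthy-looking line sits in the middle of this: certificate_manager.go:356 reports the kubelet-serving certificate valid until 2026-02-24, with a rotation deadline of 2025-12-10. That deadline is already some six weeks behind the node clock, so rotation should be attempted immediately. client-go's certificate manager derives the deadline as a jittered point roughly 70-90% of the way through the certificate's validity window; the sketch below illustrates that rule, with notAfter from the log and an assumed one-year lifetime (notBefore does not appear in this excerpt):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics the documented client-go behaviour: rotate at a
    // random point roughly 70-90% of the way through the validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // notAfter is taken from the certificate_manager line above;
        // the one-year lifetime is an assumption for illustration.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0)
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

With a one-year lifetime the rule lands between early November 2025 and mid-January 2026, which brackets the 2025-12-10 deadline recorded here.
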
Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.737662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.737694 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.737703 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.737718 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.737729 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.841034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.841130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.841142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.841159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.841172 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.862952 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/2.log" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.863593 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/1.log" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.866565 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" exitCode=1 Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.866609 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.866651 4688 scope.go:117] "RemoveContainer" containerID="f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.867884 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:07:43 crc kubenswrapper[4688]: E0123 18:07:43.868249 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.883590 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 
18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.902049 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.914626 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.925462 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.940107 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.943607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.943657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.943668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.943685 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.943704 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:43Z","lastTransitionTime":"2026-01-23T18:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.951797 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.964052 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.983249 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e783
4b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f113a9f0e393ee51386e1a6a8050f60e5375d0ac91ea5b90e8550e3094691ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:26Z\\\",\\\"message\\\":\\\"ding *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 18:07:26.196493 6155 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 18:07:26.196498 6155 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 18:07:26.196518 6155 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 18:07:26.196521 6155 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:26.196522 6155 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 18:07:26.196526 6155 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 18:07:26.196545 6155 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 18:07:26.196551 6155 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:26.196561 6155 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 18:07:26.196910 6155 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 18:07:26.196954 6155 factory.go:656] Stopping watch factory\\\\nI0123 18:07:26.196970 6155 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 18:07:26.199221 6155 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:26.199328 6155 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:26.199452 6155 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped 
ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:43 crc kubenswrapper[4688]: I0123 18:07:43.996531 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:43Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.011383 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.022995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.035723 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.046115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.046316 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.046404 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.046429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.046442 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.047561 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.064641 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.076929 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.090050 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.102396 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.149645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.149955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.150164 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.150262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.150333 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.253085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.253125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.253135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.253150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.253158 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.355417 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.355809 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.355908 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.355950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.355998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.356019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.356073 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.450205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.450418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.450575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.450646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.450701 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.463977 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.468213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.468306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.468368 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.468433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.468494 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.477008 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:54:32.915274809 +0000 UTC Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.481743 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.487601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.487632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.487640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.487653 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.487662 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.503518 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.508405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.508581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.508679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.508773 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.508862 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.523355 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.527495 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.527538 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.527550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.527569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.527579 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.541589 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.542062 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.543991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.544043 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.544055 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.544076 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.544089 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.646551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.646622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.646643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.646670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.646690 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.748911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.748945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.748955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.748972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.748985 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.852123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.852163 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.852172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.852206 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.852218 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.872207 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/2.log" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.876874 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:07:44 crc kubenswrapper[4688]: E0123 18:07:44.877043 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.890017 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.904318 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.930004 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 
18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.955929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.956000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.956019 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.956044 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.956062 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:44Z","lastTransitionTime":"2026-01-23T18:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.962345 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 
18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.982313 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:44 crc kubenswrapper[4688]: I0123 18:07:44.996270 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:44Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.007154 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.020057 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.031682 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.050980 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e783
4b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.058470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.058508 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.058521 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.058539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.058553 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.063471 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.075375 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.086801 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.098717 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.109141 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.118695 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.132892 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.161409 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.161458 4688 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.161472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.161492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.161507 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.263928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.264014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.264038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.264070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.264092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.355423 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.355490 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.355440 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:45 crc kubenswrapper[4688]: E0123 18:07:45.356668 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:45 crc kubenswrapper[4688]: E0123 18:07:45.356997 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:45 crc kubenswrapper[4688]: E0123 18:07:45.357272 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.367416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.367483 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.367496 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.367517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.367554 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.372559 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.392709 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.408687 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.432977 4688 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.448870 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.465140 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.469804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.469844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.469876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.469893 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.469903 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
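The err="failed to patch status" payloads in these records are kubelet status updates sent as strategic merge patches: the $setElementOrder/conditions stanza pins the ordering of the conditions list while only the changed entries travel in the patch. Unescaping the network-check-source-55646444c4-trplf payload that completes just above, and trimming the containerStatuses section for brevity, yields ordinary JSON of this shape:

    {
      "metadata": {"uid": "9d751cbb-f2e2-430d-9754-c882a5e924a5"},
      "status": {
        "$setElementOrder/conditions": [
          {"type": "PodReadyToStartContainers"},
          {"type": "Initialized"},
          {"type": "Ready"},
          {"type": "ContainersReady"},
          {"type": "PodScheduled"}
        ],
        "conditions": [
          {"lastTransitionTime": "2026-01-23T18:07:07Z", "status": "False", "type": "PodReadyToStartContainers"},
          {"message": "containers with unready status: [check-endpoints]", "reason": "ContainersNotReady", "type": "Ready"},
          {"lastTransitionTime": "2026-01-23T18:07:07Z", "message": "containers with unready status: [check-endpoints]", "reason": "ContainersNotReady", "status": "False", "type": "ContainersReady"}
        ],
        "podIP": null,
        "podIPs": null
      }
    }

The triple-escaped quotes in the raw records are an artifact of the patch being quoted twice: once as a JSON string inside the error, and once more by the klog formatter.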
Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.478085 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:27:08.820123819 +0000 UTC Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.480661 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.491974 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\
"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.505810 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.517713 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.530829 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.548534 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.564062 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.574318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.574405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.574449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.574475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.574514 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
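The certificate_manager.go record earlier in this same second ("Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:27:08...") shows the kubelet-serving rotation logic choosing a deadline well before expiry; notably, that deadline already lies in the past relative to the logged clock, so rotation is overdue. A sketch of how such a jittered deadline can be derived, assuming behaviour modeled on client-go's certificate manager (rotation at a random point around 70-90% of the certificate lifetime); the NotBefore value is hypothetical, since only the expiration appears in the log:

    // rotation.go - sketch of a jittered certificate-rotation deadline.
    // Assumption: mirrors client-go's approach in spirit (rotate somewhere in
    // the 70-90% band of the validity window); not the kubelet's exact code.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	// Pick a deadline uniformly between 70% and 90% of the lifetime.
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	// NotAfter is taken from the log line; NotBefore is an assumed
    	// one-year lifetime, since it is not recorded here.
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
    	notBefore := notAfter.Add(-365 * 24 * time.Hour)
    	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

The jitter spreads renewal requests out in time so that a fleet of kubelets does not stampede the signing API the moment their certificates age past a fixed threshold.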
Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.577265 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.596117 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.608495 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.620078 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:45Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.676453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.676512 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.676532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.676556 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.676573 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.779644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.779711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.779726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.779744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.779759 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.881964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.882013 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.882027 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.882046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.882063 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.986142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.986435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.986520 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.986635 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:45 crc kubenswrapper[4688]: I0123 18:07:45.986736 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:45Z","lastTransitionTime":"2026-01-23T18:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.090771 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.090806 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.090813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.090827 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.090836 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.193853 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.193904 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.193925 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.193947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.193963 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.296894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.296969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.296993 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.297021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.297043 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.355877 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:46 crc kubenswrapper[4688]: E0123 18:07:46.356527 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.400244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.400324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.400350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.400381 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.400403 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.479105 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:56:37.791415372 +0000 UTC Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.503435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.503756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.503767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.503781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.503791 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.605379 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.605417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.605429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.605446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.605479 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.707737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.707783 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.707791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.707808 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.707818 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.810542 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.810572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.810581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.810595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.810606 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.913765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.914102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.914247 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.914356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:46 crc kubenswrapper[4688]: I0123 18:07:46.914457 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:46Z","lastTransitionTime":"2026-01-23T18:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.017159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.017420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.017482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.017570 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.017628 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.119565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.119658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.119673 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.119699 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.119713 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.222548 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.222588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.222596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.222611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.222620 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.325088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.325132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.325143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.325159 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.325168 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.355635 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.355779 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:47 crc kubenswrapper[4688]: E0123 18:07:47.355933 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.356112 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:47 crc kubenswrapper[4688]: E0123 18:07:47.356302 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:47 crc kubenswrapper[4688]: E0123 18:07:47.356133 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.427398 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.427430 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.427440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.427455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.427469 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.479668 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:36:38.082841858 +0000 UTC Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.531082 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.531148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.531220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.531253 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.531276 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.634081 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.634124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.634136 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.634152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.634164 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.736764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.736816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.736833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.736857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.736874 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.840073 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.840119 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.840133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.840149 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.840161 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.942668 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.942714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.942723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.942739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:47 crc kubenswrapper[4688]: I0123 18:07:47.942750 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:47Z","lastTransitionTime":"2026-01-23T18:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.044706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.044742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.044752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.044768 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.044778 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.148558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.148816 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.148882 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.148991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.149063 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.251392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.251435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.251448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.251463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.251474 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.354494 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.354528 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.354538 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.354550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.354561 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.355872 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:48 crc kubenswrapper[4688]: E0123 18:07:48.355981 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.457529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.457593 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.457608 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.457639 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.457666 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.480253 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:53:58.781246563 +0000 UTC Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.560928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.561178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.561263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.561333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.561388 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.664133 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.664201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.664212 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.664234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.664246 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.767826 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.767892 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.767903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.767923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.767937 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.870216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.870266 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.870281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.870299 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.870312 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.973642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.973706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.973719 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.973739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:48 crc kubenswrapper[4688]: I0123 18:07:48.973756 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:48Z","lastTransitionTime":"2026-01-23T18:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.077366 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.077423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.077434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.077453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.077464 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.180600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.180655 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.180667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.180688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.180701 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.282871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.282923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.282935 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.282954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.282966 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.355596 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.355681 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.355623 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:49 crc kubenswrapper[4688]: E0123 18:07:49.355754 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:49 crc kubenswrapper[4688]: E0123 18:07:49.356100 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:49 crc kubenswrapper[4688]: E0123 18:07:49.356691 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.386323 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.386389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.386401 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.386420 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.386431 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.480834 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:43:44.088460155 +0000 UTC Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.488812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.488849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.488857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.488872 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.488881 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.591289 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.591335 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.591344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.591360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.591369 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.693576 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.693621 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.693630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.693644 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.693654 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.796402 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.796437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.796447 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.796463 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.796473 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.899250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.899311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.899324 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.899348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:49 crc kubenswrapper[4688]: I0123 18:07:49.899370 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:49Z","lastTransitionTime":"2026-01-23T18:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.007628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.007670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.007680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.007696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.007706 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.110371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.110421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.110434 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.110448 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.110461 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.212944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.212974 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.212982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.212995 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.213005 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.315992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.316051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.316060 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.316074 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.316083 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.356029 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:50 crc kubenswrapper[4688]: E0123 18:07:50.356286 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.419261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.419308 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.419319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.419342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.419353 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.481931 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:00:06.123176923 +0000 UTC Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.522222 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.522272 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.522283 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.522302 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.522312 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.624676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.624723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.624735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.624750 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.624760 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.727135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.727202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.727213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.727238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.727257 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.830102 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.830142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.830152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.830165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.830176 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.931987 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.932021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.932031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.932051 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:50 crc kubenswrapper[4688]: I0123 18:07:50.932068 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:50Z","lastTransitionTime":"2026-01-23T18:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.035124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.035172 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.035182 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.035221 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.035234 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.136921 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.136975 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.136989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.137008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.137022 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.239784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.239833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.239846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.239863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.239874 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.342529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.342573 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.342583 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.342600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.342609 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.356050 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.356049 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:51 crc kubenswrapper[4688]: E0123 18:07:51.356201 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:51 crc kubenswrapper[4688]: E0123 18:07:51.356277 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.356072 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:51 crc kubenswrapper[4688]: E0123 18:07:51.356350 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.444940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.444997 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.445012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.445031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.445043 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.483010 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:50:20.896997725 +0000 UTC Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.547622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.547672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.547684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.547701 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.547712 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.650824 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.650852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.650860 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.650874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.650885 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.753954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.753998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.754009 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.754048 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.754062 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.857661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.857707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.857720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.857740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.857754 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.960455 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.960497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.960507 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.960524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:51 crc kubenswrapper[4688]: I0123 18:07:51.960539 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:51Z","lastTransitionTime":"2026-01-23T18:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.063065 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.063132 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.063148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.063165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.063272 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.166004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.166089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.166103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.166124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.166139 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.268894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.268934 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.268946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.268969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.268982 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.355393 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:52 crc kubenswrapper[4688]: E0123 18:07:52.355555 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.371246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.371282 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.371292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.371307 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.371318 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.473669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.473714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.473726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.473743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.473755 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.483998 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:20:08.931259546 +0000 UTC Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.576577 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.576609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.576616 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.576630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.576638 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.678954 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.678991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.678999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.679012 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.679020 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.781620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.781661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.781671 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.781691 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.781708 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.884077 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.884120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.884129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.884143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.884152 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.987130 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.987169 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.987178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.987205 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:52 crc kubenswrapper[4688]: I0123 18:07:52.987214 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:52Z","lastTransitionTime":"2026-01-23T18:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.088933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.089245 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.089258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.089275 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.089598 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.191617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.191654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.191663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.191677 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.191685 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.294099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.294150 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.294193 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.294207 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.294216 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.355730 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.355819 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:53 crc kubenswrapper[4688]: E0123 18:07:53.355856 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:53 crc kubenswrapper[4688]: E0123 18:07:53.356035 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.356050 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:53 crc kubenswrapper[4688]: E0123 18:07:53.356238 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.397129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.397177 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.397209 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.397227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.397240 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.484622 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:27:26.029047245 +0000 UTC Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.500276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.500306 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.500317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.500334 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.500347 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.602913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.602960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.602972 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.602991 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.603004 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.705064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.705115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.705127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.705144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.705155 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.807730 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.807757 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.807767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.807782 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.807792 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.910498 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.910588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.910605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.910625 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:53 crc kubenswrapper[4688]: I0123 18:07:53.910639 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:53Z","lastTransitionTime":"2026-01-23T18:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.013204 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.013238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.013248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.013261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.013271 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.115696 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.115739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.115756 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.115778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.115796 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.218574 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.218640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.218648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.218662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.218671 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.321784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.321818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.321839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.321861 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.321878 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.355903 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.356070 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.424456 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.424710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.424774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.424845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.424905 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.485538 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:59:41.749630812 +0000 UTC Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.528057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.528092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.528101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.528117 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.528128 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.630565 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.630601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.630632 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.630648 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.630657 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.732965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.732994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.733004 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.733041 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.733051 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.835581 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.835670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.835686 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.835704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.835719 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.864722 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.864787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.864797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.864814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.864825 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.877598 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:54Z is after 2025-08-24T17:21:41Z"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.881963 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.881989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
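The status patch above is rejected not because of its content but because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24, while the node's clock reads 2026-01-23. A small diagnostic sketch in Go, assuming the endpoint is reachable from the node; InsecureSkipVerify is deliberate so the already-expired certificate can still be read.

```go
// Diagnostic sketch: dial the webhook endpoint named in the error and print
// the serving certificate's validity window. InsecureSkipVerify is used only
// so an expired certificate can be inspected; nothing else is sent.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed webhook Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n",
			cert.Subject.String(), cert.NotBefore, cert.NotAfter)
	}
}
```

The identical expired-certificate failure repeats on each retry below; until that webhook certificate is renewed, kubelet cannot record its status update, independently of the CNI readiness issue above.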
event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.882000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.882015 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.882025 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.892761 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:54Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.895956 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.895990 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.896003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.896021 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.896034 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.908611 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:54Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.911902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.911930 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
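
Every failed node-status patch in this stretch of the journal has the same root cause, stated in the webhook error itself: the serving certificate behind https://127.0.0.1:9743 (the node.network-node-identity.openshift.io webhook) expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23. A quick way to confirm this independently of the kubelet is to pull the certificate off that port and print its validity window. A minimal sketch, assuming Python 3 plus the third-party cryptography package on the node, and that the webhook is still listening on 127.0.0.1:9743:

    import ssl
    from cryptography import x509  # assumed available; not part of the node's stock tooling

    # Fetch the webhook's serving certificate WITHOUT validating it --
    # validation is exactly the step that fails in the records above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the log implies 2025-08-24 17:21:41 UTC

If notAfter matches the date quoted in the error, the fix is rotating the certificate (or correcting a skewed clock), not anything on the kubelet side; on CRC this pattern typically appears after resuming a VM long after its bundled certificates lapsed.
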
event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.911940 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.911955 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.911966 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.923583 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:54Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.927737 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.927785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
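
The Ready=False condition repeated above cites a second, independent failure: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and the network plugin (likely OVN-Kubernetes, given the network-node-identity webhook) only writes that file once it is running. The runtime's readiness test amounts to scanning that directory, so the same check can be reproduced directly. A minimal sketch, assuming only the directory path quoted in the log:

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message

    # libcni loads .conf, .conflist and .json files; until one appears here the
    # runtime keeps reporting NetworkPluginNotReady, as in the records above.
    try:
        confs = sorted(f for f in os.listdir(CNI_CONF_DIR)
                       if f.endswith((".conf", ".conflist", ".json")))
    except FileNotFoundError:
        confs = []

    print(confs or "no CNI configuration files found")

An empty listing matches the NetworkPluginNotReady records above; on this node it accompanies, and is plausibly downstream of, the expired-certificate failures.
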
event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.927797 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.927819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.927833 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.939726 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:54Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:54 crc kubenswrapper[4688]: E0123 18:07:54.939837 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.941383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.941407 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.941418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.941433 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:54 crc kubenswrapper[4688]: I0123 18:07:54.941442 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:54Z","lastTransitionTime":"2026-01-23T18:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.043849 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.043888 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.043913 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.043929 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.043938 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.146844 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.146883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.146894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.146909 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.146919 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.250108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.250161 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.250181 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.250219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.250235 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.352788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.352822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.352831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.352845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.352854 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.355497 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:55 crc kubenswrapper[4688]: E0123 18:07:55.355577 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.355891 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:55 crc kubenswrapper[4688]: E0123 18:07:55.355957 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.356126 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:55 crc kubenswrapper[4688]: E0123 18:07:55.356198 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.370503 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.388551 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.402157 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.418865 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.439506 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.452237 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.455142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.455211 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.455224 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.455243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.455255 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.466514 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.476833 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.486669 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:04:19.503253576 +0000 UTC Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.490769 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.500838 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.521979 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.537156 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.550843 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.557348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.557392 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.557404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.557421 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.557435 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.564420 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.575542 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 
18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.589805 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.606106 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:07:55Z is after 2025-08-24T17:21:41Z" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.659238 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.659282 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.659290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.659305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.659315 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.761667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.761702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.761714 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.761731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.761742 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.863669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.863702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.863711 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.863724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.863733 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.967605 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.967657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.967670 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.967688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:55 crc kubenswrapper[4688]: I0123 18:07:55.967702 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:55Z","lastTransitionTime":"2026-01-23T18:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.069939 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.069986 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.069998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.070020 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.070031 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.172318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.172350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.172360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.172375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.172384 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.274734 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.274775 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.274787 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.274804 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.274816 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.355302 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:56 crc kubenswrapper[4688]: E0123 18:07:56.355475 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.377143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.377202 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.377213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.377229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.377240 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.479823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.479887 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.479897 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.479917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.479927 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.487349 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:16:41.522271233 +0000 UTC Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.582023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.582072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.582084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.582101 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.582114 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.669497 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:56 crc kubenswrapper[4688]: E0123 18:07:56.669659 4688 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:56 crc kubenswrapper[4688]: E0123 18:07:56.669752 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs podName:44e9c4ca-39a2-42f8-aac2-eca60087c3ed nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.669730408 +0000 UTC m=+103.665554849 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs") pod "network-metrics-daemon-kr87l" (UID: "44e9c4ca-39a2-42f8-aac2-eca60087c3ed") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.684761 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.684821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.684845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.684868 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.684882 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.787070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.787129 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.787139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.787156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.787224 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.889502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.889558 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.889574 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.889596 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.889616 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.992766 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.992821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.992833 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.992850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:56 crc kubenswrapper[4688]: I0123 18:07:56.992862 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:56Z","lastTransitionTime":"2026-01-23T18:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.095487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.095524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.095534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.095551 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.095562 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.201555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.201612 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.201622 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.201643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.201653 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.304151 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.304201 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.304210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.304227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.304236 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.355475 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.355588 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:57 crc kubenswrapper[4688]: E0123 18:07:57.355646 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:57 crc kubenswrapper[4688]: E0123 18:07:57.355720 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.355495 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:57 crc kubenswrapper[4688]: E0123 18:07:57.355796 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.406589 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.406637 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.406646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.406663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.406672 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.487909 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 06:48:59.142464586 +0000 UTC Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.508586 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.508636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.508645 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.508659 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.508668 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.610928 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.611005 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.611032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.611063 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.611085 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.714114 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.714168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.714203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.714223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.714234 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.817385 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.817879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.817965 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.818057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.818131 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.920493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.920755 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.920864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.920957 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:57 crc kubenswrapper[4688]: I0123 18:07:57.921026 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:57Z","lastTransitionTime":"2026-01-23T18:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.023362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.023400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.023411 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.023425 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.023437 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.125749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.125799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.125814 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.125836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.125848 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.229070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.229125 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.229137 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.229158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.229172 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.331803 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.331852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.331866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.331885 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.331899 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.355869 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:07:58 crc kubenswrapper[4688]: E0123 18:07:58.356011 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.433982 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.434026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.434037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.434054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.434066 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.488467 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:07:14.731553352 +0000 UTC Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.535819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.535890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.535902 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.535918 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.535929 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.638219 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.638254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.638265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.638278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.638287 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.741606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.741651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.741664 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.741681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.741693 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.845624 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.845656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.845665 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.845680 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.845689 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.947603 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.947634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.947642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.947656 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:58 crc kubenswrapper[4688]: I0123 18:07:58.947665 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:58Z","lastTransitionTime":"2026-01-23T18:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.049225 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.049260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.049276 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.049294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.049305 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.151978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.152030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.152056 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.152078 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.152092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.255079 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.255148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.255168 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.255234 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.255256 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.356259 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.356261 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.356685 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:07:59 crc kubenswrapper[4688]: E0123 18:07:59.356814 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:07:59 crc kubenswrapper[4688]: E0123 18:07:59.357007 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:07:59 crc kubenswrapper[4688]: E0123 18:07:59.357459 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.357607 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:07:59 crc kubenswrapper[4688]: E0123 18:07:59.357734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.357980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.358014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.358026 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.358042 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.358053 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.368431 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.460765 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.460933 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.460953 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.460979 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.461145 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.489173 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:49:21.034916963 +0000 UTC Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.568237 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.568539 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.568650 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.568754 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.568836 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.672886 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.672950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.672968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.672996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.673014 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.775522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.775579 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.775591 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.775611 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.775624 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.878627 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.878917 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.879031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.879148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.879409 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.983422 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.983731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.983807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.983876 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:07:59 crc kubenswrapper[4688]: I0123 18:07:59.983947 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:07:59Z","lastTransitionTime":"2026-01-23T18:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.087000 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.087358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.087440 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.087517 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.087590 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.189469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.189522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.189534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.189552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.189566 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.292441 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.292891 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.293037 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.293170 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.293367 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.356275 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:00 crc kubenswrapper[4688]: E0123 18:08:00.356435 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.395976 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.396034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.396049 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.396072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.396087 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.490168 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:25:27.568053722 +0000 UTC Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.499642 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.499690 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.499702 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.499720 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.499734 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.602350 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.602405 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.602424 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.602450 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.602469 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.705487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.705793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.706030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.706265 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.706576 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.809391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.809439 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.809453 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.809471 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.809486 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.912303 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.912658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.912739 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.912835 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:00 crc kubenswrapper[4688]: I0123 18:08:00.912927 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:00Z","lastTransitionTime":"2026-01-23T18:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.015688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.015729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.015738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.015755 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.015765 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.118575 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.118636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.118654 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.118676 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.118696 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.220772 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.220819 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.220831 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.220843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.220853 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.323774 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.323829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.323846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.323864 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.323875 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.356274 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.356320 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:01 crc kubenswrapper[4688]: E0123 18:08:01.356388 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.356338 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:01 crc kubenswrapper[4688]: E0123 18:08:01.356633 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:01 crc kubenswrapper[4688]: E0123 18:08:01.356683 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.426601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.426649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.426666 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.426688 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.426705 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.490955 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:39:12.338808947 +0000 UTC Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.530064 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.530135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.530152 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.530179 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.530270 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.632843 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.632883 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.632894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.632908 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.632928 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.735788 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.735839 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.735850 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.735866 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.735900 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.838483 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.838522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.838530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.838547 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.838556 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.940791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.940841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.940852 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.940869 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:01 crc kubenswrapper[4688]: I0123 18:08:01.940881 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:01Z","lastTransitionTime":"2026-01-23T18:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.044818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.044862 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.044873 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.044890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.044902 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.148305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.148351 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.148360 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.148374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.148384 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.251003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.251121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.251143 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.251167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.251207 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.354662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.354710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.354723 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.354743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.354755 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.355271 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:02 crc kubenswrapper[4688]: E0123 18:08:02.355425 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.458595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.458667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.458685 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.458709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.458730 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.491283 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:15:07.015205078 +0000 UTC Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.561452 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.561513 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.561532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.561557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.561574 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.665356 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.665460 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.665472 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.665497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.665509 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.768054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.768153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.768171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.768216 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.768232 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.871158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.871244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.871262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.871287 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.871471 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.938537 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/0.log" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.938598 4688 generic.go:334] "Generic (PLEG): container finished" podID="39fdea6e-e9b8-4fb4-9375-aaf302a204d3" containerID="18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890" exitCode=1 Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.938634 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerDied","Data":"18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.939053 4688 scope.go:117] "RemoveContainer" containerID="18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.957878 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:02Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.973911 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.973957 4688 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.973969 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.973988 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.974001 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:02Z","lastTransitionTime":"2026-01-23T18:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.977148 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e783
4b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:02Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:02 crc kubenswrapper[4688]: I0123 18:08:02.988010 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:02Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.003516 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:02Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.018571 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.032120 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.043923 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.064171 4688 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c
857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.074859 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.076342 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.076372 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.076383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.076397 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.076412 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.087526 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.098701 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:08:02Z\\\",\\\"message\\\":\\\"2026-01-23T18:07:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418\\\\n2026-01-23T18:07:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418 to /host/opt/cni/bin/\\\\n2026-01-23T18:07:17Z [verbose] multus-daemon started\\\\n2026-01-23T18:07:17Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:08:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.108863 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 
18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.119409 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c175e54-ae62-42b0-9b60-4e5db7a43d73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959d36fc44bef9ec6f26f5c4838620200e14b4bfcdcb049544374118a5ec07f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.130784 4688 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c30
2c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.142285 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.154723 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.207210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.207252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.207262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.207278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.207291 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.210243 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.226246 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.311156 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.311261 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.311285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.311314 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.311332 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.356171 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.356309 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.356224 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:03 crc kubenswrapper[4688]: E0123 18:08:03.356406 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:03 crc kubenswrapper[4688]: E0123 18:08:03.356513 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:03 crc kubenswrapper[4688]: E0123 18:08:03.356662 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.415404 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.415475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.415493 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.415518 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.415536 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.492384 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 07:33:53.409207042 +0000 UTC Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.524153 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.524435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.524518 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.524606 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.524683 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.626947 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.627180 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.627273 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.627348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.627418 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.730752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.730798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.730812 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.730829 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.730841 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.834669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.834727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.834749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.834777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.834799 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.937215 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.937243 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.937252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.937264 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.937272 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:03Z","lastTransitionTime":"2026-01-23T18:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.944035 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/0.log" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.944088 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerStarted","Data":"12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056"} Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.959044 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.973214 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c175e54-ae62-42b0-9b60-4e5db7a43d73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959d36fc44bef9ec6f26f5c4838620200e14b4bfcdcb049544374118a5ec07f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:03 crc kubenswrapper[4688]: I0123 18:08:03.988256 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c302c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:03Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.003955 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:08:02Z\\\",\\\"message\\\":\\\"2026-01-23T18:07:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418\\\\n2026-01-23T18:07:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418 to /host/opt/cni/bin/\\\\n2026-01-23T18:07:17Z [verbose] multus-daemon started\\\\n2026-01-23T18:07:17Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:08:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.015997 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.029841 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.040369 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.040410 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.040418 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.040435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.040446 4688 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.042608 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.059152 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.080079 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e783
4b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.092458 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kr87l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp9sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kr87l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.107425 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.120904 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.135401 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.143058 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.143099 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.143110 4688 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.143127 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.143516 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.146995 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.209611 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.222123 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.234855 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.245553 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.245601 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.245613 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.245634 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.245683 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.254209 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:04Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.349029 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.349328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.349468 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.349597 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.349685 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.355649 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:04 crc kubenswrapper[4688]: E0123 18:08:04.355840 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.452738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.453070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.453292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.453423 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.453540 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.493233 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:34:35.961044952 +0000 UTC Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.556347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.556376 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.556383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.556395 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.556403 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.658998 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.659038 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.659054 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.659075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.659092 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.761922 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.761964 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.761978 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.761996 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.762008 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.864311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.864588 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.864684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.864777 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.864860 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.966907 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.966937 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.966946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.966960 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:04 crc kubenswrapper[4688]: I0123 18:08:04.966969 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:04Z","lastTransitionTime":"2026-01-23T18:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.071296 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.071348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.071357 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.071371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.071382 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.124894 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.125305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.125489 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.125633 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.125764 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.147671 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.153672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.153936 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.154347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.154726 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.154946 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.183672 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.188309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.188408 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.188419 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.188458 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.188472 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.202675 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.207030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.207070 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.207085 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.207103 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.207118 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.219878 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.224554 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.224836 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.225057 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.225298 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.225515 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.241835 4688 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8158c768-9e42-4de1-98a8-b8ec3e55c3b3\\\",\\\"systemUUID\\\":\\\"4ae7c631-0e1b-4025-81f0-d80cccca604c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.242032 4688 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.244092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.244120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.244128 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.244145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.244155 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.347870 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.347945 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.347967 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.347999 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.348021 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.356333 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.356400 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.356333 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.356534 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.358150 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:05 crc kubenswrapper[4688]: E0123 18:08:05.358424 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.375673 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pnr5l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb2218fb-8676-431e-b257-a3c9388095b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee27ba2f36095ede6b2711d6eee9ec3d82728d43130e0768682f604872e7e4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-72qpt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pnr5l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.391965 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dc60235a-56ea-4b78-aec3-486ba53382dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T18:07:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW0123 18:07:06.608731 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 18:07:06.608967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 18:07:06.610450 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3610486280/tls.crt::/tmp/serving-cert-3610486280/tls.key\\\\\\\"\\\\nI0123 18:07:06.884500 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 18:07:06.890746 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 18:07:06.890769 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 18:07:06.890795 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 18:07:06.890802 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 18:07:06.908777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 18:07:06.908825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 18:07:06.908792 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 18:07:06.908831 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 18:07:06.908851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 18:07:06.908856 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 18:07:06.908861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 18:07:06.908865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 18:07:06.910536 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.405944 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.419885 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"282fed6d-4a28-4498-add6-0240e6414dc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f300255a9ed90d7271e4053db55209fe24eace7798cd16c856606ee3cee68117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-86jc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nkhx2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.439022 4688 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"336645d6-da82-4dba-9436-4196367fb547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:07:43Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0123 18:07:43.212481 6359 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 18:07:43.212512 6359 factory.go:656] Stopping watch factory\\\\nI0123 18:07:43.212512 6359 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0123 18:07:43.212546 6359 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.212547 6359 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 18:07:43.212573 6359 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 18:07:43.232553 6359 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 18:07:43.232632 6359 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 18:07:43.232708 6359 ovnkube.go:599] Stopped ovnkube\\\\nI0123 18:07:43.232737 6359 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 18:07:43.232834 6359 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zsqbq_openshift-ovn-kubernetes(336645d6-da82-4dba-9436-4196367fb547)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5sgmr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zsqbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.450286 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.450340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.450352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.450371 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.450383 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
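Every "Failed to update status for pod" entry in this capture ends in the same root-cause clause: the kubelet cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23T18:08:05Z. A minimal stdlib-only Python sketch for pulling that clause out of a saved journal dump and computing how stale the certificate is; the script name, the dedup helper, and the assumption that the journal was saved with `journalctl -u kubelet` are illustrative, not taken from this capture:

```python
import re
import sys
from datetime import datetime, timezone

# Matches the x509 failure clause kubelet appends to every failed status patch.
CLAUSE = re.compile(
    r"certificate has expired or is not yet valid: "
    r"current time (\S+) is after (\S+)"
)

def parse_utc(stamp: str) -> datetime:
    """Parse the RFC 3339 UTC stamps used in the clause, e.g. 2026-01-23T18:08:05Z."""
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def report(stream) -> None:
    seen = set()  # the clause repeats per pod; print each distinct pair once
    for line in stream:
        m = CLAUSE.search(line)
        if not m or m.groups() in seen:
            continue
        seen.add(m.groups())
        now, not_after = parse_utc(m.group(1)), parse_utc(m.group(2))
        print(f"cert notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} "
              f"exceeded by {now - not_after} at {now:%Y-%m-%dT%H:%M:%SZ}")

if __name__ == "__main__":
    report(sys.stdin)  # e.g. journalctl -u kubelet | python3 cert_skew.py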
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31840f40d7f881992c2cab874e6fa8107444ccd85abcfaa009b66b6a89c6a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.477386 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb9c08f5e8acc2a51264623e933782926ad78c6e34a072af30cce9a13c464c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.490324 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82af51fb0b014fc48b0d115281cacb2a82534a8859317b62836a17c09686e7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65bc5c3d11c8f34ab5fc4449addb8bf7c6a846dea26aa6919004a67061f1a615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.493445 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:34:21.800442134 +0000 UTC Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.500412 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fw8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"988366e9-b0b9-4785-ad68-185a42d66bc8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c326f619fac288931dd1e029ab21e7334b749ac68a4542d244287e47c6fbc1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8vw27\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fw8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.512895 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8eabdd33-ae30-4252-8c4e-d016bcfe53fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9069161a0a10f8abf8bda2ef01a4ee1e34ef7ea2f50a6a53897b6792f818b27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c127735a216978f1e1d49643f831f3a7de9968ba22f4576907458efb67fe00e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63a2f4bc2af52d9f04d78e4556eab83c801ebad369d2c4dcfe2156b32a2e0758\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://204967c4121aeaf3790207b339aefc059b4bf10f98bfac75f00e32e63a1477a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a16b5faaab3f5502b69d85bd16553db3bbace510c5fcaca835b2cf009647295a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a19d4640488fdaf0c0a464889542d8253f7e4b2e1c139ad7ef8493a6ee8f021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://640164cdb84a9ed7a35991f36c544e59b42abd935295cde600e09746984ad3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:07:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j757h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nsp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.523061 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7ae5b6f8-03b9-4a8a-a2eb-2179c7669d77\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5be62dce548695ac09e837076d7aeaf3dc8568bbc64b834ec41a7d3a810e7b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d314cdee6052ff4f07d5df3a0474f543606c08a47e463a3868689e091901bc72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77b300cf680001af2a3afc301fb7a2108c518e265b3bb77fdf2de321e9a2ea0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c0950b82e064f34ab7cf89eecd77dc43c5e51336ddbdaae116c9d809c0be787\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.535791 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.554338 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.554749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.554825 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.554527 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.554903 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.555071 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.568655 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19b8e7b2-0a7a-40e8-b5f0-b6a224f00f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://120e77e137da5e9a22fac138eff7d9a446aaea469ebeb59171b05f5bf529a722\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b9637a7649101fa98c6be0b32852da0ab91df58a29e59d1ea3c26656e340b70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b7784\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2s8n4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 
18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.579541 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c175e54-ae62-42b0-9b60-4e5db7a43d73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959d36fc44bef9ec6f26f5c4838620200e14b4bfcdcb049544374118a5ec07f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4c4d670408fc6bafdd2249b9af679921e010ade9f398dfdbc786865dc3881f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T18:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.589494 4688 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f4b6a4-2537-4123-a9e1-a6fe591d06b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:06:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6d5886bb56fd155d1b4e91662071a5771031dffd532d7eacb54363d60547d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdb3fe6f6dec45bd8ce541694892fddb505cb21647829fe187c3496e5d826e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4106d70abcb3e472d0bca95b62b2c409fd0b4d8e26c30
2c9787d572c6b9e5f7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:06:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.600002 4688 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gf4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39fdea6e-e9b8-4fb4-9375-aaf302a204d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:07:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T18:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T18:08:02Z\\\",\\\"message\\\":\\\"2026-01-23T18:07:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418\\\\n2026-01-23T18:07:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_033754b6-fc20-4217-a065-09c93b44a418 to 
/host/opt/cni/bin/\\\\n2026-01-23T18:07:17Z [verbose] multus-daemon started\\\\n2026-01-23T18:07:17Z [verbose] Readiness Indicator file check\\\\n2026-01-23T18:08:02Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T18:07:10Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T18:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdq66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T18:07:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gf4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T18:08:05Z is after 2025-08-24T17:21:41Z"
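From the multus-gf4sc entry onward the capture settles into two repeating shapes: status_manager.go:875 patch failures that all terminate in the identical x509 clause, and kubelet_node_status.go:724 / setters.go:603 NodeNotReady churn. A short tally sketch under the same assumptions as above (a saved kubelet journal piped through it; the script name is again illustrative) makes the single root cause visible at a glance:

```python
import re
import sys
from collections import Counter

# Tally the two repeating shapes in this capture: per-pod status-patch failures
# (each ending in the same x509 clause) and the kubelet's node-event churn.
POD = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')
EVENT = re.compile(r'event="(\w+)"')

def tally(stream):
    pods, events = Counter(), Counter()
    for line in stream:
        pods.update(POD.findall(line))
        events.update(EVENT.findall(line))
    return pods, events

if __name__ == "__main__":
    pods, events = tally(sys.stdin)  # e.g. journalctl -u kubelet | python3 tally.py
    for name, count in pods.most_common():
        print(f"{count:4d}  {name}")
    for name, count in events.most_common():
        print(f"{count:4d}  event={name}")
```

If a single webhook certificate is the culprit, every failing pod should carry the same x509 clause regardless of workload, which is exactly the pattern this capture shows.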
node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.658299 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.760980 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.761383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.761550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.761695 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.761825 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.864681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.864742 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.864758 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.864778 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.864792 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.967740 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.967799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.967842 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.967946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:05 crc kubenswrapper[4688]: I0123 18:08:05.967997 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:05Z","lastTransitionTime":"2026-01-23T18:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.071096 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.071171 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.071223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.071255 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.071275 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.174290 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.174347 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.174362 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.174383 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.174398 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.277457 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.277519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.277537 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.277567 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.277584 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.356353 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:06 crc kubenswrapper[4688]: E0123 18:08:06.356630 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.380389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.380446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.380464 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.380492 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.380511 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.483094 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.483138 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.483148 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.483162 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.483171 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.494547 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:49:20.804933595 +0000 UTC Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.585502 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.585544 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.585555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.585572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.585584 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.689008 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.689089 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.689120 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.689145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.689163 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.791607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.791640 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.791651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.791667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.791679 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.894400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.894439 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.894446 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.894461 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.894497 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.996984 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.997068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.997084 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.997106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:06 crc kubenswrapper[4688]: I0123 18:08:06.997120 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:06Z","lastTransitionTime":"2026-01-23T18:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.102244 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.102304 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.102318 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.102336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.102348 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.205837 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.206023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.206058 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.206087 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.206111 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.309821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.309923 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.309948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.310032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.310055 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.355537 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.355630 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:07 crc kubenswrapper[4688]: E0123 18:08:07.355651 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:07 crc kubenswrapper[4688]: E0123 18:08:07.355757 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.355825 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:07 crc kubenswrapper[4688]: E0123 18:08:07.356040 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.412603 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.412651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.412662 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.412679 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.412690 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.495593 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:57:41.890328137 +0000 UTC Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.515783 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.515841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.515857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.515878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.515892 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.618344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.618403 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.618412 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.618436 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.618446 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.720705 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.720749 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.720760 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.720776 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.720791 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.823744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.823785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.823796 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.823813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.823826 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.926689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.926753 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.926770 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.926793 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:07 crc kubenswrapper[4688]: I0123 18:08:07.926810 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:07Z","lastTransitionTime":"2026-01-23T18:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.029124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.029251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.029268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.029292 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.029310 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.132744 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.132818 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.132841 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.132871 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.132893 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.237123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.237213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.237227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.237248 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.237266 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.340047 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.340093 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.340104 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.340123 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.340136 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.356363 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:08 crc kubenswrapper[4688]: E0123 18:08:08.360343 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.444032 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.444134 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.444146 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.444223 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.444242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.495836 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 15:01:34.198884867 +0000 UTC Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.546712 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.546785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.546798 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.546822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.546844 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.650135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.650246 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.650285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.650310 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.650327 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.753229 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.753305 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.753329 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.753358 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.753381 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.855942 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.856046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.856075 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.856112 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.856138 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.958681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.958762 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.958785 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.958813 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:08 crc kubenswrapper[4688]: I0123 18:08:08.958838 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:08Z","lastTransitionTime":"2026-01-23T18:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.061478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.061518 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.061529 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.061549 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.061561 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.164210 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.164251 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.164260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.164274 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.164284 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.267145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.267203 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.267218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.267233 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.267244 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.355572 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.355646 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:09 crc kubenswrapper[4688]: E0123 18:08:09.355727 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.355661 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:09 crc kubenswrapper[4688]: E0123 18:08:09.356092 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:09 crc kubenswrapper[4688]: E0123 18:08:09.356459 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.369707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.369779 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.369799 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.369821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.369840 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.473218 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.473268 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.473281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.473298 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.473310 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.496664 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:27:19.353488425 +0000 UTC Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.576738 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.576822 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.576847 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.576878 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.576901 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.680857 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.680914 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.680927 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.680944 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.680960 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.784789 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.784845 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.784863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.784890 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.784911 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.888145 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.888236 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.888263 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.888285 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.888299 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.991344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.991396 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.991413 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.991437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:09 crc kubenswrapper[4688]: I0123 18:08:09.991457 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:09Z","lastTransitionTime":"2026-01-23T18:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.094807 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.094863 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.094879 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.094900 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.094915 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.197938 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.197973 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.198003 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.198018 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.198029 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.301258 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.301317 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.301328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.301344 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.301357 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.355424 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:10 crc kubenswrapper[4688]: E0123 18:08:10.355623 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.404374 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.404435 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.404454 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.404484 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.404510 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.497842 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 18:28:25.519551606 +0000 UTC Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.507649 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.507729 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.507764 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.507791 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.507808 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.610577 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.610667 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.610706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.610735 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.610757 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.713951 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.713992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.714030 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.714045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.714055 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.817391 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.817470 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.817503 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.817532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.817554 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.920399 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.920497 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.920524 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.920557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:10 crc kubenswrapper[4688]: I0123 18:08:10.920587 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:10Z","lastTransitionTime":"2026-01-23T18:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.023630 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.023698 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.023717 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.023745 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.023764 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.127291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.127475 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.128071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.128158 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.128539 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.226497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.226706 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:09:15.226686871 +0000 UTC m=+150.222511332 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.231600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.231658 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.231681 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.231709 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.231731 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.327918 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.328024 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.328092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328142 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.328177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328233 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328358 4688 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328383 4688 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328451 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 18:09:15.328418228 +0000 UTC m=+150.324242709 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328546 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328556 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:09:15.328471669 +0000 UTC m=+150.324296150 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328577 4688 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328599 4688 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328594 4688 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328677 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 18:09:15.328654584 +0000 UTC m=+150.324479055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.328757 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 18:09:15.328726596 +0000 UTC m=+150.324551037 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.335121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.335167 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.335178 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.335213 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.335226 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.355724 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.355807 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.355825 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.356002 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.356232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:11 crc kubenswrapper[4688]: E0123 18:08:11.357470 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.438139 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.438226 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.438241 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.438259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.438270 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.498038 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:51:56.451928478 +0000 UTC Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.541534 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.541586 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.541618 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.541641 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.541654 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.643989 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.644025 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.644033 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.644045 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.644054 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.746481 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.746530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.746541 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.746555 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.746566 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.849530 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.849569 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.849580 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.849595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.849606 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.954011 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.954071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.954083 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.954106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:11 crc kubenswrapper[4688]: I0123 18:08:11.954119 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:11Z","lastTransitionTime":"2026-01-23T18:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.057968 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.058052 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.058071 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.058487 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.058754 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.161284 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.161325 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.161337 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.161352 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.161362 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.263522 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.263595 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.263607 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.263628 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.263642 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.355356 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:12 crc kubenswrapper[4688]: E0123 18:08:12.355525 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.357321 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.368050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.368108 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.368121 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.368142 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.368156 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.470946 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.471002 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.471014 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.471031 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.471042 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.498740 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:26:11.8459462 +0000 UTC Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.574252 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.574301 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.574309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.574327 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.574338 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.677617 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.677672 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.677684 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.677704 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.677719 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.780600 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.780646 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.780657 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.780675 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.780685 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.883994 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.884072 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.884092 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.884124 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.884149 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.978609 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/2.log" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.982876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerStarted","Data":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.983530 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.987291 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.987328 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.987416 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.987437 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:12 crc kubenswrapper[4688]: I0123 18:08:12.987449 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:12Z","lastTransitionTime":"2026-01-23T18:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.037886 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.037854075 podStartE2EDuration="14.037854075s" podCreationTimestamp="2026-01-23 18:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.011593445 +0000 UTC m=+88.007417876" watchObservedRunningTime="2026-01-23 18:08:13.037854075 +0000 UTC m=+88.033678516" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.056808 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gf4sc" podStartSLOduration=64.056784349 podStartE2EDuration="1m4.056784349s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.056588463 +0000 UTC m=+88.052412904" watchObservedRunningTime="2026-01-23 18:08:13.056784349 +0000 UTC m=+88.052608790" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.057019 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=62.056988344 podStartE2EDuration="1m2.056988344s" podCreationTimestamp="2026-01-23 18:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.040719421 +0000 UTC m=+88.036543872" watchObservedRunningTime="2026-01-23 18:08:13.056988344 +0000 UTC m=+88.052812785" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.092135 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.092165 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.092175 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.092227 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.092242 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.094744 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2s8n4" podStartSLOduration=64.094724409 podStartE2EDuration="1m4.094724409s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.073036201 +0000 UTC m=+88.068860642" watchObservedRunningTime="2026-01-23 18:08:13.094724409 +0000 UTC m=+88.090548850" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.111798 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.111763672 podStartE2EDuration="1m6.111763672s" podCreationTimestamp="2026-01-23 18:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.095016816 +0000 UTC m=+88.090841267" watchObservedRunningTime="2026-01-23 18:08:13.111763672 +0000 UTC m=+88.107588113" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.147400 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podStartSLOduration=64.14737633 podStartE2EDuration="1m4.14737633s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.133414778 +0000 UTC m=+88.129239219" watchObservedRunningTime="2026-01-23 18:08:13.14737633 +0000 UTC m=+88.143200771" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.160026 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pnr5l" podStartSLOduration=65.159999116 podStartE2EDuration="1m5.159999116s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.147769491 +0000 UTC m=+88.143593962" watchObservedRunningTime="2026-01-23 18:08:13.159999116 +0000 UTC m=+88.155823557" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.195311 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.195636 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.195731 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.195821 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.195888 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.216131 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podStartSLOduration=64.21610953 podStartE2EDuration="1m4.21610953s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.215773101 +0000 UTC m=+88.211597582" watchObservedRunningTime="2026-01-23 18:08:13.21610953 +0000 UTC m=+88.211933971" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.251835 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6nsp2" podStartSLOduration=64.25181933 podStartE2EDuration="1m4.25181933s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.251242975 +0000 UTC m=+88.247067416" watchObservedRunningTime="2026-01-23 18:08:13.25181933 +0000 UTC m=+88.247643771" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.252083 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fw8bl" podStartSLOduration=65.252079027 podStartE2EDuration="1m5.252079027s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.229539447 +0000 UTC m=+88.225363888" watchObservedRunningTime="2026-01-23 18:08:13.252079027 +0000 UTC m=+88.247903468" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.298220 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.298250 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.298262 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.298294 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.298309 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.299694 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.299681095 podStartE2EDuration="35.299681095s" podCreationTimestamp="2026-01-23 18:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:13.280065922 +0000 UTC m=+88.275890383" watchObservedRunningTime="2026-01-23 18:08:13.299681095 +0000 UTC m=+88.295505536" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.355486 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.355549 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.355588 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:13 crc kubenswrapper[4688]: E0123 18:08:13.355636 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:13 crc kubenswrapper[4688]: E0123 18:08:13.355749 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:13 crc kubenswrapper[4688]: E0123 18:08:13.355823 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.401390 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.401620 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.401724 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.401802 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.401859 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.499217 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:02:25.983847434 +0000 UTC Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.504333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.504557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.504700 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.504823 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.504919 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.607669 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.607950 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.608023 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.608106 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.608204 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.715429 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.715469 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.715482 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.715501 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.715514 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.819046 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.819088 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.819097 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.819115 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.819128 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.921587 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.921846 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.921948 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.922034 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:13 crc kubenswrapper[4688]: I0123 18:08:13.922108 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:13Z","lastTransitionTime":"2026-01-23T18:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.025278 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.025336 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.025348 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.025363 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.025374 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.127557 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.127598 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.127610 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.127629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.127643 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.230874 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.231254 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.231394 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.231505 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.231594 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.252806 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kr87l"] Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.253011 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:14 crc kubenswrapper[4688]: E0123 18:08:14.253150 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kr87l" podUID="44e9c4ca-39a2-42f8-aac2-eca60087c3ed" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.335651 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.335697 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.335710 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.335727 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.335748 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.439359 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.439389 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.439400 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.439417 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.439429 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.500326 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:52:26.541670826 +0000 UTC Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.541609 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.541678 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.541689 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.541706 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.541718 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.644580 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.644629 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.644643 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.644661 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.644675 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.746663 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.746743 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.746781 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.746803 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.746815 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.848992 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.849039 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.849050 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.849068 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.849081 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.951707 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.951752 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.951767 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.951784 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:14 crc kubenswrapper[4688]: I0123 18:08:14.951796 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:14Z","lastTransitionTime":"2026-01-23T18:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.054281 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.054322 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.054333 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.054349 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.054361 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.157467 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.157519 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.157533 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.157552 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.157564 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.260260 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.260309 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.260319 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.260340 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.260353 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.355688 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.355741 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.355802 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:15 crc kubenswrapper[4688]: E0123 18:08:15.356496 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 18:08:15 crc kubenswrapper[4688]: E0123 18:08:15.356573 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 18:08:15 crc kubenswrapper[4688]: E0123 18:08:15.356648 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.362478 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.362532 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.362550 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.362572 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.362589 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.465375 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.465432 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.465449 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.465474 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.465492 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.501270 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:49:25.678566928 +0000 UTC Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.502006 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.502098 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.502118 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.502144 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.502166 4688 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T18:08:15Z","lastTransitionTime":"2026-01-23T18:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.581739 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99"] Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.582069 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.584617 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.584791 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.585536 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.586953 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.676135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.676182 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc 
kubenswrapper[4688]: I0123 18:08:15.676268 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.676289 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.676318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777064 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777167 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777212 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777245 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.777311 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.778442 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.783832 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.800487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4abe95ae-3f90-49e9-9bd7-fdb07370ec58-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n8t99\" (UID: \"4abe95ae-3f90-49e9-9bd7-fdb07370ec58\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.894779 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" Jan 23 18:08:15 crc kubenswrapper[4688]: W0123 18:08:15.907853 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4abe95ae_3f90_49e9_9bd7_fdb07370ec58.slice/crio-3aa6e7c7a147d27ebf290be31ed04b5825543279bb9add146bca7add93b7c407 WatchSource:0}: Error finding container 3aa6e7c7a147d27ebf290be31ed04b5825543279bb9add146bca7add93b7c407: Status 404 returned error can't find the container with id 3aa6e7c7a147d27ebf290be31ed04b5825543279bb9add146bca7add93b7c407 Jan 23 18:08:15 crc kubenswrapper[4688]: I0123 18:08:15.994751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" event={"ID":"4abe95ae-3f90-49e9-9bd7-fdb07370ec58","Type":"ContainerStarted","Data":"3aa6e7c7a147d27ebf290be31ed04b5825543279bb9add146bca7add93b7c407"} Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.288259 4688 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.288800 4688 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.338886 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7vr8"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.339429 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.339820 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.340371 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.342004 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mrcbl"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.342856 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.348840 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.349424 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.349605 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.350387 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.350957 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.351129 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.351061 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.351014 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.351749 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.351863 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.352706 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.353422 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.353561 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.353743 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354228 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354331 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354390 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354421 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354432 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.354547 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.355257 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.361332 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.361459 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.361570 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.361839 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.362878 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.363066 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.363753 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.365776 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.375976 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.377227 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.377407 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.377825 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.379244 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.379416 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.379584 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.379877 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.380956 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388386 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388455 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388580 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388661 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388403 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.388471 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.395219 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.396377 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55577"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.397856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-dir\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.397898 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: 
\"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.397930 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.397954 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.397979 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398075 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398109 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398156 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-images\") 
pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.398711 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399412 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399706 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-serving-cert\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399774 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-client\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399825 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkdb\" (UniqueName: \"kubernetes.io/projected/5c848f78-3db5-42fe-a021-777411e9d5b6-kube-api-access-tpkdb\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399850 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-policies\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbx7\" (UniqueName: \"kubernetes.io/projected/10c46862-d70f-445e-82a8-f76c17326a8b-kube-api-access-znbx7\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399930 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399951 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399983 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.399991 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400006 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cncq4\" (UniqueName: \"kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400059 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-encryption-config\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400113 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kssfq\" (UniqueName: \"kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400153 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-config\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400251 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400272 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvjh\" (UniqueName: \"kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400294 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400316 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.400339 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c46862-d70f-445e-82a8-f76c17326a8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.401865 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.402089 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.408364 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.408585 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.408600 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.408930 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.418663 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.419495 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.420354 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.420489 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.423787 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.424730 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.424779 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.424940 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.425066 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.425504 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.427284 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8rxmx"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.427892 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.428896 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-f29lx"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.429507 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.430646 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.430768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.432372 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vv5s9"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.432899 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.433052 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.433955 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.434793 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c7cd"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.435765 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.435864 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.436631 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.436811 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.437269 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.437424 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.437463 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.437514 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.437515 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.438206 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.438695 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7vr8"] Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.439308 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.440972 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.442258 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.442538 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.443256 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 18:08:16 crc kubenswrapper[4688]: I0123 18:08:16.444902 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:16.445351 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.107746 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.108043 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.110928 4688 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.111298 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.112901 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:30:59.50462311 +0000 UTC Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.113015 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.119951 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-config\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.120523 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.121755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvjh\" (UniqueName: \"kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.121864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.121913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.121945 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122057 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c46862-d70f-445e-82a8-f76c17326a8b-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-dir\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122123 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122147 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122199 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.122524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.123370 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-dir\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.124399 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc 
kubenswrapper[4688]: I0123 18:08:17.128129 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.134776 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.134971 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.135058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.137590 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-config\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.146079 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.146388 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.151785 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.152064 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.152106 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.152259 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.152295 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.136519 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-images\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.153114 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.151766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.154529 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.156045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.165528 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.177445 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.177517 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.177725 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.152944 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.177895 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 18:08:17 crc kubenswrapper[4688]: 
I0123 18:08:17.178113 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-serving-cert\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178392 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178557 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-client\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkdb\" (UniqueName: \"kubernetes.io/projected/5c848f78-3db5-42fe-a021-777411e9d5b6-kube-api-access-tpkdb\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-policies\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178774 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znbx7\" (UniqueName: \"kubernetes.io/projected/10c46862-d70f-445e-82a8-f76c17326a8b-kube-api-access-znbx7\") pod 
\"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178879 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178955 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179029 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179178 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cncq4\" (UniqueName: \"kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179365 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179418 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/10c46862-d70f-445e-82a8-f76c17326a8b-images\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-encryption-config\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179517 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kssfq\" (UniqueName: \"kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179725 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5c848f78-3db5-42fe-a021-777411e9d5b6-audit-policies\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179787 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.179946 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.180176 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.178977 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mrcbl"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.185041 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.188984 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.189256 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.189499 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.189647 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.189683 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.191838 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202352 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202654 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202364 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202457 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202922 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/10c46862-d70f-445e-82a8-f76c17326a8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.202988 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.203146 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.203440 4688 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.198792 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.203727 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f29lx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.203798 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.204239 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vv5s9"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.204439 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.204672 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205018 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205069 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205173 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205493 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205910 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.205991 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.206084 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.206321 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.209140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.211402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.212940 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.214772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.215571 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.215663 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.225511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kssfq\" (UniqueName: \"kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.226109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert\") pod \"controller-manager-879f6c89f-hqlrn\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.228519 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cncq4\" (UniqueName: \"kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4\") pod \"route-controller-manager-6576b87f9c-knmkq\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.228812 4688 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.228917 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znbx7\" (UniqueName: \"kubernetes.io/projected/10c46862-d70f-445e-82a8-f76c17326a8b-kube-api-access-znbx7\") pod \"machine-api-operator-5694c8668f-mrcbl\" (UID: \"10c46862-d70f-445e-82a8-f76c17326a8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.229287 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvjh\" (UniqueName: \"kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh\") pod \"oauth-openshift-558db77b4-c7vr8\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.229828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-encryption-config\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.229990 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-etcd-client\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.230001 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c848f78-3db5-42fe-a021-777411e9d5b6-serving-cert\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.231435 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c7cd"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.232809 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8rxmx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.233367 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkdb\" (UniqueName: \"kubernetes.io/projected/5c848f78-3db5-42fe-a021-777411e9d5b6-kube-api-access-tpkdb\") pod \"apiserver-7bbb656c7d-gd24s\" (UID: \"5c848f78-3db5-42fe-a021-777411e9d5b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.234498 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55577"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.234544 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.234720 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.235705 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.238306 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-svczn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.239148 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.239589 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-svczn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.240558 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhn49"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.241289 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6wxpp"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.241995 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.242007 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.242135 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.242404 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xxv4w"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.244221 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.244895 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.245296 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.249762 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.251065 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.252166 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.252427 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.252698 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.252916 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.253879 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.254082 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.254314 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.254653 4688 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.254925 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.255242 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.255467 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.255669 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.255869 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.255942 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.256754 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nshhm"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.261505 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.261895 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262117 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262587 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262225 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262262 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.261650 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262328 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.262345 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.263259 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.263297 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.263453 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.263595 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.263971 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.264673 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.268792 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.270145 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.270491 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.273535 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.273578 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.274418 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.279702 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.280167 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.280501 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.280547 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ace2e31-8536-4df1-aa9c-78dd8de6f170-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.281677 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.281710 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.282713 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287375 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287478 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k8zx\" (UniqueName: \"kubernetes.io/projected/4bc9750e-684a-4163-85c7-328d7a64ac9b-kube-api-access-7k8zx\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8fnr\" (UniqueName: \"kubernetes.io/projected/f3396880-cea3-401c-bcff-b9477770ead5-kube-api-access-t8fnr\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287563 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-config\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287606 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-image-import-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287647 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzgw\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-kube-api-access-xxzgw\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287672 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wch2b\" (UniqueName: \"kubernetes.io/projected/f9fd8784-aa6e-486b-98a6-cc9536032892-kube-api-access-wch2b\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287730 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-audit-dir\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " 
pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287754 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ace2e31-8536-4df1-aa9c-78dd8de6f170-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287808 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287840 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-config\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-serving-cert\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287884 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2bp\" (UniqueName: \"kubernetes.io/projected/1ace2e31-8536-4df1-aa9c-78dd8de6f170-kube-api-access-pq2bp\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287961 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/155aa9e8-9113-4458-b240-d82a31701801-machine-approver-tls\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.287986 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-audit\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288033 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288062 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288088 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-encryption-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-auth-proxy-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288151 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5k4t\" (UniqueName: \"kubernetes.io/projected/155aa9e8-9113-4458-b240-d82a31701801-kube-api-access-l5k4t\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288178 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4bc9750e-684a-4163-85c7-328d7a64ac9b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288281 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-node-pullsecrets\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288319 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288364 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nmp\" (UniqueName: \"kubernetes.io/projected/326fe39a-476f-411d-aa4e-e2a44c68c841-kube-api-access-z9nmp\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-service-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288447 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3396880-cea3-401c-bcff-b9477770ead5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288503 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288537 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwgp\" (UniqueName: \"kubernetes.io/projected/a65ef93e-9a84-4907-84e4-fcf7248bba7d-kube-api-access-lpwgp\") pod \"downloads-7954f5f757-8rxmx\" (UID: \"a65ef93e-9a84-4907-84e4-fcf7248bba7d\") " pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288570 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/326fe39a-476f-411d-aa4e-e2a44c68c841-serving-cert\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcrd\" (UniqueName: \"kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288620 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-client\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288645 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-serving-cert\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288702 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc9750e-684a-4163-85c7-328d7a64ac9b-serving-cert\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288771 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-trusted-ca\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288828 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkznc\" (UniqueName: \"kubernetes.io/projected/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-kube-api-access-lkznc\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.288852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.289420 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.290198 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.290756 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.291090 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.291793 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kjslx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.292790 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.293758 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6fl6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.294338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.294989 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.295643 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.296107 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.296704 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.297750 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vjdm"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.298244 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.298847 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.300133 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.300952 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.301201 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.301212 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.301639 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.302290 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.302443 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.302846 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.303166 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jkttk"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.303546 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jkttk" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.304248 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svczn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.305481 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.306334 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.306988 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.308085 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnkg6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.308509 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.309148 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.310851 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-twvtx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.311706 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.311831 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twvtx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.313178 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6wxpp"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.314296 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.315340 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.316787 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kjslx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.317742 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.318662 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.318783 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xxv4w"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.320643 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.322074 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.322276 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vjdm"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.324027 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.324519 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.326098 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhn49"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.327877 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.336128 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.336220 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6fl6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.345931 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.348690 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twvtx"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.350528 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.352205 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.353825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.354710 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.355358 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.356041 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.356387 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.362129 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.366269 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.366316 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.366329 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnkg6"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.366340 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.383270 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391164 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxzgw\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-kube-api-access-xxzgw\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391240 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wch2b\" (UniqueName: \"kubernetes.io/projected/f9fd8784-aa6e-486b-98a6-cc9536032892-kube-api-access-wch2b\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391273 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krh6q\" (UniqueName: \"kubernetes.io/projected/41317431-17cf-46e5-997e-afcc7b8d01e3-kube-api-access-krh6q\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391298 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-metrics-certs\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391315 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm4qp\" (UniqueName: \"kubernetes.io/projected/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-kube-api-access-tm4qp\") pod \"router-default-5444994796-nshhm\" (UID: 
\"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391331 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-service-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391351 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-audit-dir\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391373 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ace2e31-8536-4df1-aa9c-78dd8de6f170-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391408 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-config\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391501 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-serving-cert\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391558 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq2bp\" (UniqueName: \"kubernetes.io/projected/1ace2e31-8536-4df1-aa9c-78dd8de6f170-kube-api-access-pq2bp\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391592 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n95f\" (UniqueName: \"kubernetes.io/projected/ba751b5a-7f01-46b9-9734-56d19059f727-kube-api-access-6n95f\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391628 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cd9307-1c77-45f6-94c5-b27f7542281b-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391734 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-client\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391776 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/155aa9e8-9113-4458-b240-d82a31701801-machine-approver-tls\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391802 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-audit\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.391830 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-stats-auth\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.392964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ace2e31-8536-4df1-aa9c-78dd8de6f170-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.393073 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.393176 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 
crc kubenswrapper[4688]: I0123 18:08:17.393240 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-encryption-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.393266 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba751b5a-7f01-46b9-9734-56d19059f727-metrics-tls\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.393329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-audit-dir\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.393899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-audit\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-auth-proxy-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394225 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a4f2b0b-8d76-4871-8197-1c12a79726e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394263 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13cd9307-1c77-45f6-94c5-b27f7542281b-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5k4t\" (UniqueName: \"kubernetes.io/projected/155aa9e8-9113-4458-b240-d82a31701801-kube-api-access-l5k4t\") pod \"machine-approver-56656f9798-k4cn5\" (UID: 
\"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394325 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4bc9750e-684a-4163-85c7-328d7a64ac9b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-node-pullsecrets\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394379 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4f2b0b-8d76-4871-8197-1c12a79726e3-config\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394477 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9fd8784-aa6e-486b-98a6-cc9536032892-node-pullsecrets\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394489 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nmp\" (UniqueName: \"kubernetes.io/projected/326fe39a-476f-411d-aa4e-e2a44c68c841-kube-api-access-z9nmp\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-config\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394586 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-service-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 
18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3396880-cea3-401c-bcff-b9477770ead5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwgp\" (UniqueName: \"kubernetes.io/projected/a65ef93e-9a84-4907-84e4-fcf7248bba7d-kube-api-access-lpwgp\") pod \"downloads-7954f5f757-8rxmx\" (UID: \"a65ef93e-9a84-4907-84e4-fcf7248bba7d\") " pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394770 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/326fe39a-476f-411d-aa4e-e2a44c68c841-serving-cert\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394791 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a4f2b0b-8d76-4871-8197-1c12a79726e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vcrd\" (UniqueName: \"kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394847 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/4bc9750e-684a-4163-85c7-328d7a64ac9b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394865 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-client\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394896 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-service-ca-bundle\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394951 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-serving-cert\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395042 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc9750e-684a-4163-85c7-328d7a64ac9b-serving-cert\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395067 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-serving-cert\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395096 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-trusted-ca\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395151 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkznc\" (UniqueName: \"kubernetes.io/projected/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-kube-api-access-lkznc\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395174 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.396979 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-trusted-ca\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.394549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.396651 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-auth-proxy-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.396911 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397014 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397305 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397460 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-default-certificate\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397492 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397516 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ace2e31-8536-4df1-aa9c-78dd8de6f170-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397582 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k8zx\" (UniqueName: \"kubernetes.io/projected/4bc9750e-684a-4163-85c7-328d7a64ac9b-kube-api-access-7k8zx\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397603 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8fnr\" (UniqueName: \"kubernetes.io/projected/f3396880-cea3-401c-bcff-b9477770ead5-kube-api-access-t8fnr\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397629 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6j5\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-kube-api-access-8b6j5\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: 
\"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-config\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397673 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-image-import-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.398060 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.396087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.398089 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.398570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/155aa9e8-9113-4458-b240-d82a31701801-config\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.398660 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9fd8784-aa6e-486b-98a6-cc9536032892-image-import-ca\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.398777 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-config\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.395750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.397324 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326fe39a-476f-411d-aa4e-e2a44c68c841-service-ca-bundle\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.399434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.402447 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-serving-cert\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.402512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-config\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.402616 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-etcd-client\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.402732 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9fd8784-aa6e-486b-98a6-cc9536032892-encryption-config\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.403432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/155aa9e8-9113-4458-b240-d82a31701801-machine-approver-tls\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.404301 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.406821 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: 
\"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.407010 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/326fe39a-476f-411d-aa4e-e2a44c68c841-serving-cert\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.407593 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ace2e31-8536-4df1-aa9c-78dd8de6f170-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.407861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.407938 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-serving-cert\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.410140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3396880-cea3-401c-bcff-b9477770ead5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.411478 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bc9750e-684a-4163-85c7-328d7a64ac9b-serving-cert\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.411693 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.419856 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.422578 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.451399 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.464048 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.483633 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498798 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-serving-cert\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498863 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-default-certificate\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498879 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498921 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b6j5\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-kube-api-access-8b6j5\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krh6q\" (UniqueName: \"kubernetes.io/projected/41317431-17cf-46e5-997e-afcc7b8d01e3-kube-api-access-krh6q\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.498995 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-metrics-certs\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499011 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm4qp\" (UniqueName: \"kubernetes.io/projected/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-kube-api-access-tm4qp\") pod \"router-default-5444994796-nshhm\" 
(UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-service-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n95f\" (UniqueName: \"kubernetes.io/projected/ba751b5a-7f01-46b9-9734-56d19059f727-kube-api-access-6n95f\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cd9307-1c77-45f6-94c5-b27f7542281b-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499118 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-client\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499139 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-stats-auth\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499160 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba751b5a-7f01-46b9-9734-56d19059f727-metrics-tls\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13cd9307-1c77-45f6-94c5-b27f7542281b-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499251 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a4f2b0b-8d76-4871-8197-1c12a79726e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499274 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a4f2b0b-8d76-4871-8197-1c12a79726e3-config\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-config\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a4f2b0b-8d76-4871-8197-1c12a79726e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.499372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-service-ca-bundle\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.500175 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.500295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-service-ca-bundle\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.500458 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4f2b0b-8d76-4871-8197-1c12a79726e3-config\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.501078 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-config\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.501149 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-service-ca\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.502380 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13cd9307-1c77-45f6-94c5-b27f7542281b-trusted-ca\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.503624 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-metrics-certs\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.503652 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-stats-auth\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.503679 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-serving-cert\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.503607 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-default-certificate\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.504766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41317431-17cf-46e5-997e-afcc7b8d01e3-etcd-client\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.506289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13cd9307-1c77-45f6-94c5-b27f7542281b-metrics-tls\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.507902 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a4f2b0b-8d76-4871-8197-1c12a79726e3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 
18:08:17.509307 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba751b5a-7f01-46b9-9734-56d19059f727-metrics-tls\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.521823 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.543088 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.561545 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.567352 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7vr8"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.581857 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.602422 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.622777 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.641417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.643072 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.662319 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.683495 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.702510 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.722171 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.742649 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.762696 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.782348 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.823498 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.843629 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.843842 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.844849 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.845987 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mrcbl"] Jan 23 18:08:17 crc kubenswrapper[4688]: W0123 18:08:17.855361 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10c46862_d70f_445e_82a8_f76c17326a8b.slice/crio-fc0f3491c291d100a5e793a77ffa9b6f3c72b5526aad5bfa635191057ca9f14b WatchSource:0}: Error finding container fc0f3491c291d100a5e793a77ffa9b6f3c72b5526aad5bfa635191057ca9f14b: Status 404 returned error can't find the container with id fc0f3491c291d100a5e793a77ffa9b6f3c72b5526aad5bfa635191057ca9f14b Jan 23 18:08:17 crc kubenswrapper[4688]: W0123 18:08:17.857497 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40379f1a_aa94_41f2_aeb2_de63f0c78d68.slice/crio-b60b771de6fe0860615cf061d99d1159410993df07058f905a58e803ac8a19d3 WatchSource:0}: Error finding container b60b771de6fe0860615cf061d99d1159410993df07058f905a58e803ac8a19d3: Status 404 returned error can't find the container with id b60b771de6fe0860615cf061d99d1159410993df07058f905a58e803ac8a19d3 Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.862638 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.882661 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.903512 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.923900 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.943156 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.962363 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 18:08:17 crc kubenswrapper[4688]: I0123 18:08:17.982910 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 18:08:18 crc 
kubenswrapper[4688]: I0123 18:08:18.002316 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.022135 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.043366 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.061966 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.088381 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.104611 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.123017 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.143012 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.162887 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.182914 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.203320 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.217218 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" event={"ID":"23f88ea9-d4bc-4702-8561-0babb8fe52df","Type":"ContainerStarted","Data":"5f401395323b3483e48895cd8d5dc22e44620b4c2c9172ceb717d21912959837"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.217268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" event={"ID":"23f88ea9-d4bc-4702-8561-0babb8fe52df","Type":"ContainerStarted","Data":"a3dd9eca58137ccc024d1504b296ed0ec0929446646aea26c4d897864cb3cb8b"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.217537 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.218582 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" event={"ID":"40379f1a-aa94-41f2-aeb2-de63f0c78d68","Type":"ContainerStarted","Data":"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.218610 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" 
event={"ID":"40379f1a-aa94-41f2-aeb2-de63f0c78d68","Type":"ContainerStarted","Data":"b60b771de6fe0860615cf061d99d1159410993df07058f905a58e803ac8a19d3"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.219062 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.220066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" event={"ID":"4abe95ae-3f90-49e9-9bd7-fdb07370ec58","Type":"ContainerStarted","Data":"02e07f15626111f66a867ae6bab535da6c62aac9d4df14317fbb209538346007"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.220398 4688 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hqlrn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.220439 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.221952 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c848f78-3db5-42fe-a021-777411e9d5b6" containerID="160f3cbb3fed1134bd7209963a76c0a17350f8bb353191605bcd1bcf1871bd01" exitCode=0 Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.222085 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.222277 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" event={"ID":"5c848f78-3db5-42fe-a021-777411e9d5b6","Type":"ContainerDied","Data":"160f3cbb3fed1134bd7209963a76c0a17350f8bb353191605bcd1bcf1871bd01"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.222305 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" event={"ID":"5c848f78-3db5-42fe-a021-777411e9d5b6","Type":"ContainerStarted","Data":"d911b5dadad026e4684d8efcf93fa53a14530b214087f8ccb57563c329c3edea"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.224283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" event={"ID":"10c46862-d70f-445e-82a8-f76c17326a8b","Type":"ContainerStarted","Data":"7df4ed42c23e9b47ab785d1a133b2752e7d886c69421c9079c5b1d7a54dcab4e"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.224319 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" event={"ID":"10c46862-d70f-445e-82a8-f76c17326a8b","Type":"ContainerStarted","Data":"a488200f304d6877e620915aa9ac7c25d64b5b1ad36bb20c37a155a2b5e4968a"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.224333 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" event={"ID":"10c46862-d70f-445e-82a8-f76c17326a8b","Type":"ContainerStarted","Data":"fc0f3491c291d100a5e793a77ffa9b6f3c72b5526aad5bfa635191057ca9f14b"} 
Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.227232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" event={"ID":"21f38108-a9e5-4b3e-84a6-ad3e5152b1be","Type":"ContainerStarted","Data":"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.227265 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" event={"ID":"21f38108-a9e5-4b3e-84a6-ad3e5152b1be","Type":"ContainerStarted","Data":"3d95a80a8d1edbd623b1b23069a70fa095b55eeb742c36066eb4fff67e23111d"} Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.227535 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.242519 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.262210 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.281763 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.300933 4688 request.go:700] Waited for 1.002333623s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.302289 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.322155 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.342427 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.362411 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.381911 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.402250 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.422017 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.426612 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.442949 4688 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.462677 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.495342 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.502150 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.524661 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.542225 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.563759 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.575352 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.581947 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.603567 4688 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.622400 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.642916 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.662127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.681952 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.701993 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.722149 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.742591 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.762267 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.782589 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.802652 4688 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.847726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxzgw\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-kube-api-access-xxzgw\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.859304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq2bp\" (UniqueName: \"kubernetes.io/projected/1ace2e31-8536-4df1-aa9c-78dd8de6f170-kube-api-access-pq2bp\") pod \"openshift-apiserver-operator-796bbdcf4f-wptf8\" (UID: \"1ace2e31-8536-4df1-aa9c-78dd8de6f170\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.877022 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wch2b\" (UniqueName: \"kubernetes.io/projected/f9fd8784-aa6e-486b-98a6-cc9536032892-kube-api-access-wch2b\") pod \"apiserver-76f77b778f-9c7cd\" (UID: \"f9fd8784-aa6e-486b-98a6-cc9536032892\") " pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.898132 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5k4t\" (UniqueName: \"kubernetes.io/projected/155aa9e8-9113-4458-b240-d82a31701801-kube-api-access-l5k4t\") pod \"machine-approver-56656f9798-k4cn5\" (UID: \"155aa9e8-9113-4458-b240-d82a31701801\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.917758 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nmp\" (UniqueName: \"kubernetes.io/projected/326fe39a-476f-411d-aa4e-e2a44c68c841-kube-api-access-z9nmp\") pod \"authentication-operator-69f744f599-55577\" (UID: \"326fe39a-476f-411d-aa4e-e2a44c68c841\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.941506 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwgp\" (UniqueName: \"kubernetes.io/projected/a65ef93e-9a84-4907-84e4-fcf7248bba7d-kube-api-access-lpwgp\") pod \"downloads-7954f5f757-8rxmx\" (UID: \"a65ef93e-9a84-4907-84e4-fcf7248bba7d\") " pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.957018 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkznc\" (UniqueName: \"kubernetes.io/projected/e57d4885-6ec2-4f22-9b2b-548a2ca15c99-kube-api-access-lkznc\") pod \"console-operator-58897d9998-vv5s9\" (UID: \"e57d4885-6ec2-4f22-9b2b-548a2ca15c99\") " pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.977260 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vcrd\" (UniqueName: \"kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd\") pod \"console-f9d7485db-f29lx\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:18 crc kubenswrapper[4688]: I0123 18:08:18.997453 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8fnr\" (UniqueName: \"kubernetes.io/projected/f3396880-cea3-401c-bcff-b9477770ead5-kube-api-access-t8fnr\") pod \"cluster-samples-operator-665b6dd947-9zmz8\" (UID: \"f3396880-cea3-401c-bcff-b9477770ead5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.002338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.015088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hp5vq\" (UID: \"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.029730 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.036535 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k8zx\" (UniqueName: \"kubernetes.io/projected/4bc9750e-684a-4163-85c7-328d7a64ac9b-kube-api-access-7k8zx\") pod \"openshift-config-operator-7777fb866f-4m5tx\" (UID: \"4bc9750e-684a-4163-85c7-328d7a64ac9b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:19 crc kubenswrapper[4688]: W0123 18:08:19.041956 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod155aa9e8_9113_4458_b240_d82a31701801.slice/crio-4dc2610ee642bca0308a6e28230ce2b19530cdae119c4fad063f4ca3431f655f WatchSource:0}: Error finding container 4dc2610ee642bca0308a6e28230ce2b19530cdae119c4fad063f4ca3431f655f: Status 404 returned error can't find the container with id 4dc2610ee642bca0308a6e28230ce2b19530cdae119c4fad063f4ca3431f655f Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.055399 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b6j5\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-kube-api-access-8b6j5\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.061070 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.071763 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.080035 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krh6q\" (UniqueName: \"kubernetes.io/projected/41317431-17cf-46e5-997e-afcc7b8d01e3-kube-api-access-krh6q\") pod \"etcd-operator-b45778765-xxv4w\" (UID: \"41317431-17cf-46e5-997e-afcc7b8d01e3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.085887 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.098630 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.101482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n95f\" (UniqueName: \"kubernetes.io/projected/ba751b5a-7f01-46b9-9734-56d19059f727-kube-api-access-6n95f\") pod \"dns-operator-744455d44c-jhn49\" (UID: \"ba751b5a-7f01-46b9-9734-56d19059f727\") " pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.109782 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.116789 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm4qp\" (UniqueName: \"kubernetes.io/projected/44b10f0a-1d4c-4d21-9c48-d08b3e18786e-kube-api-access-tm4qp\") pod \"router-default-5444994796-nshhm\" (UID: \"44b10f0a-1d4c-4d21-9c48-d08b3e18786e\") " pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.117370 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.128630 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.137770 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a4f2b0b-8d76-4871-8197-1c12a79726e3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t26qj\" (UID: \"1a4f2b0b-8d76-4871-8197-1c12a79726e3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.161879 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.162199 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13cd9307-1c77-45f6-94c5-b27f7542281b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4kq47\" (UID: \"13cd9307-1c77-45f6-94c5-b27f7542281b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.209979 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tgn\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220096 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79605b45-524f-433a-88f6-8b7ab42c85e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220147 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-config-volume\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220256 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4297e801-77fd-43f7-ba12-4b620088a5d2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220278 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79605b45-524f-433a-88f6-8b7ab42c85e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220337 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-images\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220406 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4297e801-77fd-43f7-ba12-4b620088a5d2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: 
\"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220433 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmls\" (UniqueName: \"kubernetes.io/projected/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-kube-api-access-7xmls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220482 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjd44\" (UniqueName: \"kubernetes.io/projected/7b5b3930-a465-4c33-8efe-273fd9f7ca59-kube-api-access-rjd44\") pod \"migrator-59844c95c7-6zkjn\" (UID: \"7b5b3930-a465-4c33-8efe-273fd9f7ca59\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220530 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cec23b4-8312-4b09-b9ea-b93202b96afd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220588 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72866af2-cf21-4ff1-bff0-a750c155801d-proxy-tls\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220615 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: 
\"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjdh\" (UniqueName: \"kubernetes.io/projected/9cec23b4-8312-4b09-b9ea-b93202b96afd-kube-api-access-hjjdh\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220766 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4297e801-77fd-43f7-ba12-4b620088a5d2-config\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gj9w\" (UniqueName: \"kubernetes.io/projected/72866af2-cf21-4ff1-bff0-a750c155801d-kube-api-access-9gj9w\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220810 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-registry-certificates\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220825 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-metrics-tls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220839 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cec23b4-8312-4b09-b9ea-b93202b96afd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220878 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79605b45-524f-433a-88f6-8b7ab42c85e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220909 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.220941 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.226638 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:19.72662459 +0000 UTC m=+94.722449031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.241733 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.242240 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.275657 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.279841 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.280977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" event={"ID":"5c848f78-3db5-42fe-a021-777411e9d5b6","Type":"ContainerStarted","Data":"4d86f9e3311483ca261cf2ef6c4f7d92f293666b34f15032407e65aac192b57b"} Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.282935 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" event={"ID":"155aa9e8-9113-4458-b240-d82a31701801","Type":"ContainerStarted","Data":"4dc2610ee642bca0308a6e28230ce2b19530cdae119c4fad063f4ca3431f655f"} Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.310406 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55577"] Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.319614 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325419 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325605 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m868l\" (UniqueName: \"kubernetes.io/projected/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-kube-api-access-m868l\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325642 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmz6\" (UniqueName: \"kubernetes.io/projected/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-kube-api-access-cqmz6\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4297e801-77fd-43f7-ba12-4b620088a5d2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325719 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325755 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-node-bootstrap-token\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325774 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79605b45-524f-433a-88f6-8b7ab42c85e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7284a1ab-8a12-4cae-89f6-f1da071d6cce-cert\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325848 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-socket-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325896 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-csi-data-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325934 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-images\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325951 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325972 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvx2\" (UniqueName: \"kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.325990 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-certs\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326008 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jw6\" (UniqueName: \"kubernetes.io/projected/1f885d7f-713d-48bd-b80a-51807d564fff-kube-api-access-57jw6\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326098 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f885d7f-713d-48bd-b80a-51807d564fff-tmpfs\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326134 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4297e801-77fd-43f7-ba12-4b620088a5d2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxt5\" 
(UniqueName: \"kubernetes.io/projected/0143da39-695c-4027-98ff-b91a5b87777c-kube-api-access-wkxt5\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326199 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd2dac7-df42-40f3-8944-213d34513bc9-config\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326252 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xmls\" (UniqueName: \"kubernetes.io/projected/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-kube-api-access-7xmls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326269 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpr4\" (UniqueName: \"kubernetes.io/projected/fd81314a-84fb-4f6d-92f3-de71c92238d9-kube-api-access-9cpr4\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326290 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kw9\" (UniqueName: \"kubernetes.io/projected/edbce9b5-49b4-466d-b96b-dd40e492ede6-kube-api-access-k6kw9\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326311 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whtmj\" (UniqueName: \"kubernetes.io/projected/a00335ed-4674-4448-b37b-b71713264800-kube-api-access-whtmj\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326354 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7q4k\" (UniqueName: \"kubernetes.io/projected/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-kube-api-access-r7q4k\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.326482 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:19.826464588 +0000 UTC m=+94.822289029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.326982 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-srv-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd44\" (UniqueName: \"kubernetes.io/projected/7b5b3930-a465-4c33-8efe-273fd9f7ca59-kube-api-access-rjd44\") pod \"migrator-59844c95c7-6zkjn\" (UID: \"7b5b3930-a465-4c33-8efe-273fd9f7ca59\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327114 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327134 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7t6\" (UniqueName: \"kubernetes.io/projected/3dd2dac7-df42-40f3-8944-213d34513bc9-kube-api-access-rc7t6\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327174 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l62w\" (UniqueName: \"kubernetes.io/projected/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-kube-api-access-8l62w\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327233 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cec23b4-8312-4b09-b9ea-b93202b96afd-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327254 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-key\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72866af2-cf21-4ff1-bff0-a750c155801d-proxy-tls\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327326 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcgg\" (UniqueName: \"kubernetes.io/projected/516f90dd-64de-4a63-8420-0c963c358692-kube-api-access-5zcgg\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327344 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327373 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-mountpoint-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327441 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-profile-collector-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd2dac7-df42-40f3-8944-213d34513bc9-serving-cert\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: 
\"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327525 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327548 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjdh\" (UniqueName: \"kubernetes.io/projected/9cec23b4-8312-4b09-b9ea-b93202b96afd-kube-api-access-hjjdh\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327588 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4297e801-77fd-43f7-ba12-4b620088a5d2-config\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327652 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gj9w\" (UniqueName: \"kubernetes.io/projected/72866af2-cf21-4ff1-bff0-a750c155801d-kube-api-access-9gj9w\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327682 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-registry-certificates\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-metrics-tls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327719 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-cabundle\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327738 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cec23b4-8312-4b09-b9ea-b93202b96afd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327757 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-registration-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327786 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79605b45-524f-433a-88f6-8b7ab42c85e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327834 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327854 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvs5\" (UniqueName: \"kubernetes.io/projected/4203f041-a5af-47a8-999b-329b617fe415-kube-api-access-kwvs5\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327913 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/edbce9b5-49b4-466d-b96b-dd40e492ede6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327935 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4203f041-a5af-47a8-999b-329b617fe415-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327963 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-plugins-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tgn\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.327999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/edbce9b5-49b4-466d-b96b-dd40e492ede6-proxy-tls\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328017 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz66\" (UniqueName: \"kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/516f90dd-64de-4a63-8420-0c963c358692-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328079 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79605b45-524f-433a-88f6-8b7ab42c85e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328111 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-srv-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: 
\"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328132 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-config-volume\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpdz2\" (UniqueName: \"kubernetes.io/projected/7284a1ab-8a12-4cae-89f6-f1da071d6cce-kube-api-access-zpdz2\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.328169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-webhook-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.332734 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79605b45-524f-433a-88f6-8b7ab42c85e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.333807 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:19.833797623 +0000 UTC m=+94.829622064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.335065 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.340606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-images\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.340930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.341509 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cec23b4-8312-4b09-b9ea-b93202b96afd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.493684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4297e801-77fd-43f7-ba12-4b620088a5d2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.496108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-metrics-tls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.496679 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cec23b4-8312-4b09-b9ea-b93202b96afd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.501582 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-config-volume\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.504688 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.506394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-node-bootstrap-token\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.506437 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.514770 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/72866af2-cf21-4ff1-bff0-a750c155801d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.522957 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7284a1ab-8a12-4cae-89f6-f1da071d6cce-cert\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.526370 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4297e801-77fd-43f7-ba12-4b620088a5d2-config\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.530982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-socket-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531060 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531681 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-registry-certificates\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531787 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-socket-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531880 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531969 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvx2\" (UniqueName: \"kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.531993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-csi-data-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.532095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-csi-data-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.532870 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-certs\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.533035 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57jw6\" (UniqueName: \"kubernetes.io/projected/1f885d7f-713d-48bd-b80a-51807d564fff-kube-api-access-57jw6\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.533598 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.543102 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.544795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/72866af2-cf21-4ff1-bff0-a750c155801d-proxy-tls\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.545615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f885d7f-713d-48bd-b80a-51807d564fff-tmpfs\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.546002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1f885d7f-713d-48bd-b80a-51807d564fff-tmpfs\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.546144 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.046116475 +0000 UTC m=+95.041940996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.546208 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxt5\" (UniqueName: \"kubernetes.io/projected/0143da39-695c-4027-98ff-b91a5b87777c-kube-api-access-wkxt5\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.546641 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd2dac7-df42-40f3-8944-213d34513bc9-config\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.547701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpr4\" (UniqueName: \"kubernetes.io/projected/fd81314a-84fb-4f6d-92f3-de71c92238d9-kube-api-access-9cpr4\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.547793 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kw9\" (UniqueName: \"kubernetes.io/projected/edbce9b5-49b4-466d-b96b-dd40e492ede6-kube-api-access-k6kw9\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.547829 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whtmj\" (UniqueName: \"kubernetes.io/projected/a00335ed-4674-4448-b37b-b71713264800-kube-api-access-whtmj\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.547985 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.548021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7q4k\" (UniqueName: \"kubernetes.io/projected/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-kube-api-access-r7q4k\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.550300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-srv-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.550486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.550519 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7t6\" (UniqueName: \"kubernetes.io/projected/3dd2dac7-df42-40f3-8944-213d34513bc9-kube-api-access-rc7t6\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.552677 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.052644019 +0000 UTC m=+95.048468460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.561896 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l62w\" (UniqueName: \"kubernetes.io/projected/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-kube-api-access-8l62w\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.561959 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-key\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcgg\" (UniqueName: \"kubernetes.io/projected/516f90dd-64de-4a63-8420-0c963c358692-kube-api-access-5zcgg\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562051 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-mountpoint-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562168 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-profile-collector-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562217 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd2dac7-df42-40f3-8944-213d34513bc9-serving-cert\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-cabundle\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562400 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-registration-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwvs5\" (UniqueName: \"kubernetes.io/projected/4203f041-a5af-47a8-999b-329b617fe415-kube-api-access-kwvs5\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562484 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/edbce9b5-49b4-466d-b96b-dd40e492ede6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4203f041-a5af-47a8-999b-329b617fe415-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562546 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-plugins-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/edbce9b5-49b4-466d-b96b-dd40e492ede6-proxy-tls\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/516f90dd-64de-4a63-8420-0c963c358692-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562645 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzz66\" (UniqueName: \"kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562707 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-srv-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562726 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdz2\" (UniqueName: \"kubernetes.io/projected/7284a1ab-8a12-4cae-89f6-f1da071d6cce-kube-api-access-zpdz2\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562746 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-webhook-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m868l\" (UniqueName: \"kubernetes.io/projected/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-kube-api-access-m868l\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562863 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmz6\" (UniqueName: \"kubernetes.io/projected/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-kube-api-access-cqmz6\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.562917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.565658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dd2dac7-df42-40f3-8944-213d34513bc9-config\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.573917 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.583735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.584821 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-registration-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.586858 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.590579 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.599363 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79605b45-524f-433a-88f6-8b7ab42c85e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.615797 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjdh\" (UniqueName: \"kubernetes.io/projected/9cec23b4-8312-4b09-b9ea-b93202b96afd-kube-api-access-hjjdh\") pod \"openshift-controller-manager-operator-756b6f6bc6-jxjcx\" (UID: \"9cec23b4-8312-4b09-b9ea-b93202b96afd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.618873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.619373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-certs\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.619491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.620206 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7284a1ab-8a12-4cae-89f6-f1da071d6cce-cert\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.620522 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0143da39-695c-4027-98ff-b91a5b87777c-node-bootstrap-token\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.619926 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.629055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.629974 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmls\" (UniqueName: \"kubernetes.io/projected/8bb11912-99d3-4d3c-82bf-cc347a2b1d93-kube-api-access-7xmls\") pod \"dns-default-svczn\" (UID: \"8bb11912-99d3-4d3c-82bf-cc347a2b1d93\") " pod="openshift-dns/dns-default-svczn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.630251 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd44\" (UniqueName: \"kubernetes.io/projected/7b5b3930-a465-4c33-8efe-273fd9f7ca59-kube-api-access-rjd44\") pod \"migrator-59844c95c7-6zkjn\" (UID: \"7b5b3930-a465-4c33-8efe-273fd9f7ca59\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.631263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1f885d7f-713d-48bd-b80a-51807d564fff-webhook-cert\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.631303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gj9w\" (UniqueName: \"kubernetes.io/projected/72866af2-cf21-4ff1-bff0-a750c155801d-kube-api-access-9gj9w\") pod \"machine-config-operator-74547568cd-rhb87\" (UID: \"72866af2-cf21-4ff1-bff0-a750c155801d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.631303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/edbce9b5-49b4-466d-b96b-dd40e492ede6-proxy-tls\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.631316 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.631837 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tgn\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.632380 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-mountpoint-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.633302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-cabundle\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.633481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/edbce9b5-49b4-466d-b96b-dd40e492ede6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.634442 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fd81314a-84fb-4f6d-92f3-de71c92238d9-plugins-dir\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.635493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.636687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/516f90dd-64de-4a63-8420-0c963c358692-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.640484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4297e801-77fd-43f7-ba12-4b620088a5d2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kncxm\" (UID: \"4297e801-77fd-43f7-ba12-4b620088a5d2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.645088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57jw6\" (UniqueName: \"kubernetes.io/projected/1f885d7f-713d-48bd-b80a-51807d564fff-kube-api-access-57jw6\") pod \"packageserver-d55dfcdfc-5xft6\" (UID: \"1f885d7f-713d-48bd-b80a-51807d564fff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.651586 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/79605b45-524f-433a-88f6-8b7ab42c85e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-82hnc\" (UID: \"79605b45-524f-433a-88f6-8b7ab42c85e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.652179 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-srv-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.656973 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-srv-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.661873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvx2\" (UniqueName: \"kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2\") pod \"collect-profiles-29486520-f4gd5\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.662610 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a00335ed-4674-4448-b37b-b71713264800-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.664371 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-signing-key\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.665337 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.665457 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.165437602 +0000 UTC m=+95.161262053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.666385 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.666652 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.166644344 +0000 UTC m=+95.162468795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.668790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-profile-collector-cert\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.668869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4203f041-a5af-47a8-999b-329b617fe415-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.673000 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.673023 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dd2dac7-df42-40f3-8944-213d34513bc9-serving-cert\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.696571 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxt5\" (UniqueName: \"kubernetes.io/projected/0143da39-695c-4027-98ff-b91a5b87777c-kube-api-access-wkxt5\") pod \"machine-config-server-jkttk\" (UID: \"0143da39-695c-4027-98ff-b91a5b87777c\") " pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.696603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whtmj\" (UniqueName: \"kubernetes.io/projected/a00335ed-4674-4448-b37b-b71713264800-kube-api-access-whtmj\") pod \"olm-operator-6b444d44fb-h8xb9\" (UID: \"a00335ed-4674-4448-b37b-b71713264800\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.697790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7q4k\" (UniqueName: \"kubernetes.io/projected/71ab11f7-6719-4e2a-8993-4c7eed4d51c3-kube-api-access-r7q4k\") pod \"package-server-manager-789f6589d5-s7wj5\" (UID: \"71ab11f7-6719-4e2a-8993-4c7eed4d51c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.697995 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kw9\" (UniqueName: \"kubernetes.io/projected/edbce9b5-49b4-466d-b96b-dd40e492ede6-kube-api-access-k6kw9\") pod \"machine-config-controller-84d6567774-8g4rh\" (UID: \"edbce9b5-49b4-466d-b96b-dd40e492ede6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.698688 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpr4\" (UniqueName: \"kubernetes.io/projected/fd81314a-84fb-4f6d-92f3-de71c92238d9-kube-api-access-9cpr4\") pod \"csi-hostpathplugin-xnkg6\" (UID: \"fd81314a-84fb-4f6d-92f3-de71c92238d9\") " pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.711600 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.720845 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.738732 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7t6\" (UniqueName: \"kubernetes.io/projected/3dd2dac7-df42-40f3-8944-213d34513bc9-kube-api-access-rc7t6\") pod \"service-ca-operator-777779d784-lm9gw\" (UID: \"3dd2dac7-df42-40f3-8944-213d34513bc9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.738885 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.750901 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwvs5\" (UniqueName: \"kubernetes.io/projected/4203f041-a5af-47a8-999b-329b617fe415-kube-api-access-kwvs5\") pod \"control-plane-machine-set-operator-78cbb6b69f-hdshg\" (UID: \"4203f041-a5af-47a8-999b-329b617fe415\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.759537 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jkttk"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.766987 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-svczn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.767670 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.768354 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l62w\" (UniqueName: \"kubernetes.io/projected/6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c-kube-api-access-8l62w\") pod \"kube-storage-version-migrator-operator-b67b599dd-fcftc\" (UID: \"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.777868 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.779939 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.780142 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.280113024 +0000 UTC m=+95.275937465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.781613 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.281597674 +0000 UTC m=+95.277422115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.781837 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.791407 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.813787 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcgg\" (UniqueName: \"kubernetes.io/projected/516f90dd-64de-4a63-8420-0c963c358692-kube-api-access-5zcgg\") pod \"multus-admission-controller-857f4d67dd-kjslx\" (UID: \"516f90dd-64de-4a63-8420-0c963c358692\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.831620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdz2\" (UniqueName: \"kubernetes.io/projected/7284a1ab-8a12-4cae-89f6-f1da071d6cce-kube-api-access-zpdz2\") pod \"ingress-canary-twvtx\" (UID: \"7284a1ab-8a12-4cae-89f6-f1da071d6cce\") " pod="openshift-ingress-canary/ingress-canary-twvtx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.859013 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzz66\" (UniqueName: \"kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66\") pod \"marketplace-operator-79b997595-k6fl6\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.860715 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m868l\" (UniqueName: \"kubernetes.io/projected/7b4e9061-966a-40a1-bbc8-dd8dc3bc530f-kube-api-access-m868l\") pod \"catalog-operator-68c6474976-hbb56\" (UID: \"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.882713 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.883132 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.383114896 +0000 UTC m=+95.378939337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.897874 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.902942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmz6\" (UniqueName: \"kubernetes.io/projected/e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba-kube-api-access-cqmz6\") pod \"service-ca-9c57cc56f-7vjdm\" (UID: \"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.905337 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.912761 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.922772 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.951303 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.951393 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.953665 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.974541 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.986031 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:19 crc kubenswrapper[4688]: E0123 18:08:19.986381 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.486367195 +0000 UTC m=+95.482191636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:19 crc kubenswrapper[4688]: I0123 18:08:19.995499 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.004361 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.049500 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.086841 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.087272 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.587256371 +0000 UTC m=+95.583080812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.111419 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-twvtx"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.201361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.201668 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.701655316 +0000 UTC m=+95.697479757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.304731 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.304990 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.804975386 +0000 UTC m=+95.800799827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.392288 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nshhm" event={"ID":"44b10f0a-1d4c-4d21-9c48-d08b3e18786e","Type":"ContainerStarted","Data":"b1f682fca3e3821431f19bd77a9f20ea5fe58a7a4ae197d196fffc5ebe16685f"}
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.392356 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nshhm" event={"ID":"44b10f0a-1d4c-4d21-9c48-d08b3e18786e","Type":"ContainerStarted","Data":"0e6c858b4a5c605fec44764ef0cd15d25a7030f45e868dcef91c6181407e6032"}
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.413563 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.413928 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:20.913913757 +0000 UTC m=+95.909738198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.428443 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" event={"ID":"155aa9e8-9113-4458-b240-d82a31701801","Type":"ContainerStarted","Data":"2b892df6e7f24af67e71a924b38705811fb0132ca3d574e1dd2b8d5542bca6e5"}
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.433454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jkttk" event={"ID":"0143da39-695c-4027-98ff-b91a5b87777c","Type":"ContainerStarted","Data":"e252b50fccc9639c3f60cf9f0178a9a8b939116dca45e0dcebb823bd09c51346"}
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.435802 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" event={"ID":"326fe39a-476f-411d-aa4e-e2a44c68c841","Type":"ContainerStarted","Data":"a67d48227ce402ec64021db7c5187ea52a2867dae01d810b1c4d6d923620a46d"}
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.516202 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.516398 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.016334033 +0000 UTC m=+96.012158464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.517711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.524041 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.024024938 +0000 UTC m=+96.019849379 (durationBeforeRetry 500ms).
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.619792 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.620396 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.120369503 +0000 UTC m=+96.116193944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.693387 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8rxmx"]
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.716416 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8"]
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.721638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.721961 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.221945647 +0000 UTC m=+96.217770078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.826540 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.826859 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.326841889 +0000 UTC m=+96.322666330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.929157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:20 crc kubenswrapper[4688]: E0123 18:08:20.929540 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.429525672 +0000 UTC m=+96.425350113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:20 crc kubenswrapper[4688]: W0123 18:08:20.934296 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65ef93e_9a84_4907_84e4_fcf7248bba7d.slice/crio-34ef81cf3cce9d15f3d64a67faa545e2be815eb30d1d4779cf2fea3b8c344bef WatchSource:0}: Error finding container 34ef81cf3cce9d15f3d64a67faa545e2be815eb30d1d4779cf2fea3b8c344bef: Status 404 returned error can't find the container with id 34ef81cf3cce9d15f3d64a67faa545e2be815eb30d1d4779cf2fea3b8c344bef
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.944627 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vv5s9"]
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.954028 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mrcbl" podStartSLOduration=71.954009644 podStartE2EDuration="1m11.954009644s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:20.953294315 +0000 UTC m=+95.949118756" watchObservedRunningTime="2026-01-23 18:08:20.954009644 +0000 UTC m=+95.949834085"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.985735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8"]
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.986212 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s" podStartSLOduration=71.986178231 podStartE2EDuration="1m11.986178231s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:20.983875439 +0000 UTC m=+95.979699880" watchObservedRunningTime="2026-01-23 18:08:20.986178231 +0000 UTC m=+95.982002672"
Jan 23 18:08:20 crc kubenswrapper[4688]: I0123 18:08:20.998106 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-f29lx"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.009392 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.010959 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c7cd"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.016714 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" podStartSLOduration=73.016697803 podStartE2EDuration="1m13.016697803s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.016463437 +0000 UTC m=+96.012287868" watchObservedRunningTime="2026-01-23 18:08:21.016697803 +0000 UTC m=+96.012522244"
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.029576 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.031905 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.032309 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.532293078 +0000 UTC m=+96.528117519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.143075 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" podStartSLOduration=72.143046606 podStartE2EDuration="1m12.143046606s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.122695525 +0000 UTC m=+96.118519966" watchObservedRunningTime="2026-01-23 18:08:21.143046606 +0000 UTC m=+96.138871047"
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.147910 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.158314 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.658286132 +0000 UTC m=+96.654110573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: W0123 18:08:21.234982 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4a321be_034e_49be_bcb8_114be9ecc457.slice/crio-6a7825cf6625e5152e08b47fd0cdeba5f910c7c2692fec4cdcc4918324d52c40 WatchSource:0}: Error finding container 6a7825cf6625e5152e08b47fd0cdeba5f910c7c2692fec4cdcc4918324d52c40: Status 404 returned error can't find the container with id 6a7825cf6625e5152e08b47fd0cdeba5f910c7c2692fec4cdcc4918324d52c40
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.266839 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.267224 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.76712495 +0000 UTC m=+96.762949391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.267376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.267763 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.767747986 +0000 UTC m=+96.763572417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.283705 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nshhm"
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.295741 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" podStartSLOduration=72.295712761 podStartE2EDuration="1m12.295712761s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.213841651 +0000 UTC m=+96.209666092" watchObservedRunningTime="2026-01-23 18:08:21.295712761 +0000 UTC m=+96.291537202"
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.346655 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.376845 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.377250 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.877227101 +0000 UTC m=+96.873051542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.471602 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n8t99" podStartSLOduration=72.471566802 podStartE2EDuration="1m12.471566802s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.469170798 +0000 UTC m=+96.464995259" watchObservedRunningTime="2026-01-23 18:08:21.471566802 +0000 UTC m=+96.467391243"
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.478424 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.478827 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:21.978809405 +0000 UTC m=+96.974633846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.526878 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"]
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.527026 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" event={"ID":"e57d4885-6ec2-4f22-9b2b-548a2ca15c99","Type":"ContainerStarted","Data":"f83abc55af050d590b4881d1b6fd21bcb8b3c3ccec1d2479621aff6df8cca6f2"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.538372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rxmx" event={"ID":"a65ef93e-9a84-4907-84e4-fcf7248bba7d","Type":"ContainerStarted","Data":"34ef81cf3cce9d15f3d64a67faa545e2be815eb30d1d4779cf2fea3b8c344bef"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.542024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f29lx" event={"ID":"d4a321be-034e-49be-bcb8-114be9ecc457","Type":"ContainerStarted","Data":"6a7825cf6625e5152e08b47fd0cdeba5f910c7c2692fec4cdcc4918324d52c40"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.555814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" event={"ID":"f9fd8784-aa6e-486b-98a6-cc9536032892","Type":"ContainerStarted","Data":"61609c7d0559ec579ab43ad623e94c23feb0ac9be5d5ca14131d3def2ae5a96f"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.559539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" event={"ID":"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3","Type":"ContainerStarted","Data":"3c8bd6d7ea55174c68334859d1358f3deb67b42095188795043f549afd0f5f8a"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.566013 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jkttk" event={"ID":"0143da39-695c-4027-98ff-b91a5b87777c","Type":"ContainerStarted","Data":"43fd863b38c90902280826cb34da8f3d20d634fbd7efddef7e5357cec890def3"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.577501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" event={"ID":"1ace2e31-8536-4df1-aa9c-78dd8de6f170","Type":"ContainerStarted","Data":"674f1a69dd16f37be2de28f4cbb861805b14760d6cd15ca47056672dcb4bd018"}
Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.579088 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.579555 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.079535776 +0000 UTC m=+97.075360217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.079535776 +0000 UTC m=+97.075360217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.580918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" event={"ID":"326fe39a-476f-411d-aa4e-e2a44c68c841","Type":"ContainerStarted","Data":"92ee33654c7b80c0a9474215ef1d733a8d9cb6038b4c22c31cc30646a20e2abe"} Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.601028 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" event={"ID":"4bc9750e-684a-4163-85c7-328d7a64ac9b","Type":"ContainerStarted","Data":"bdb4f12f2599423f11de1a86092ffc03ae45d2642f4c28e35d954ae210aee85f"} Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.615723 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" event={"ID":"155aa9e8-9113-4458-b240-d82a31701801","Type":"ContainerStarted","Data":"0556ad1b10e88f59a66237e13cde4b0adec9515b084bc31cef0eeee0c3e7f753"} Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.636483 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nshhm" podStartSLOduration=72.636446891 podStartE2EDuration="1m12.636446891s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.633310218 +0000 UTC m=+96.629134659" watchObservedRunningTime="2026-01-23 18:08:21.636446891 +0000 UTC m=+96.632271352" Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.654218 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-55577" podStartSLOduration=73.654179323 podStartE2EDuration="1m13.654179323s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.653838994 +0000 UTC m=+96.649663435" watchObservedRunningTime="2026-01-23 18:08:21.654179323 +0000 UTC m=+96.650003764" Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.672975 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jkttk" podStartSLOduration=5.672959643 podStartE2EDuration="5.672959643s" podCreationTimestamp="2026-01-23 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.670797505 +0000 UTC m=+96.666621946" watchObservedRunningTime="2026-01-23 18:08:21.672959643 +0000 UTC m=+96.668784084" Jan 23 
18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.682596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.692370 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.192351809 +0000 UTC m=+97.188176340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.784500 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.784842 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.28479646 +0000 UTC m=+97.280620901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.785393 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.785881 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.285870169 +0000 UTC m=+97.281694610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.912281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.913826 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.413774794 +0000 UTC m=+97.409599235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.914535 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:21 crc kubenswrapper[4688]: E0123 18:08:21.915054 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.415044048 +0000 UTC m=+97.410868489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.995028 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:21 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:21 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:21 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:21 crc kubenswrapper[4688]: I0123 18:08:21.995452 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.016278 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.016453 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.516426595 +0000 UTC m=+97.512251036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.016681 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.017261 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.517242587 +0000 UTC m=+97.513067028 (durationBeforeRetry 500ms). 
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.118055 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.118366 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.618319408 +0000 UTC m=+97.614143849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.118543 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.118981 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.618965865 +0000 UTC m=+97.614790306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.225843 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.226301 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.726285712 +0000 UTC m=+97.722110143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.327429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.328146 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.828132543 +0000 UTC m=+97.823956984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.423112 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.423264 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.429833 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.431729 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:22.93170418 +0000 UTC m=+97.927528621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.452666 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.490340 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 18:08:22 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Jan 23 18:08:22 crc kubenswrapper[4688]: [+]process-running ok
Jan 23 18:08:22 crc kubenswrapper[4688]: healthz check failed
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.490413 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.491277 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k4cn5" podStartSLOduration=74.491264406 podStartE2EDuration="1m14.491264406s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:21.719893492 +0000 UTC m=+96.715717933" watchObservedRunningTime="2026-01-23 18:08:22.491264406 +0000 UTC m=+97.487088837"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.533961 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.534273 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.0342611 +0000 UTC m=+98.030085541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.632323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" event={"ID":"3b50fa17-e8a8-45bc-bafe-b1ba75fd51d3","Type":"ContainerStarted","Data":"28053f323d21f15abdb8206930cb3e1ef45787d00f1e5e1500c1feb4852bfd85"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.643449 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.644471 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.144441523 +0000 UTC m=+98.140265964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.664951 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" event={"ID":"e57d4885-6ec2-4f22-9b2b-548a2ca15c99","Type":"ContainerStarted","Data":"9e7de1175e8dcd25713cabecaa28724b890e911791a1cfee6a757606e3f1cb30"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.666691 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vv5s9"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.677842 4688 patch_prober.go:28] interesting pod/console-operator-58897d9998-vv5s9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.677937 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" podUID="e57d4885-6ec2-4f22-9b2b-548a2ca15c99" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.713902 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rxmx" event={"ID":"a65ef93e-9a84-4907-84e4-fcf7248bba7d","Type":"ContainerStarted","Data":"5fb16bb36401b133791455f69ca04c7e6e228c974d6b0c3ac05714a8f8ef78f8"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.714877 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8rxmx"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.727664 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f29lx" event={"ID":"d4a321be-034e-49be-bcb8-114be9ecc457","Type":"ContainerStarted","Data":"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.738998 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.739023 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hp5vq" podStartSLOduration=73.738999671 podStartE2EDuration="1m13.738999671s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.688031534 +0000 UTC m=+97.683855975" watchObservedRunningTime="2026-01-23 18:08:22.738999671 +0000 UTC m=+97.734824112"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.740500 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" podStartSLOduration=73.74048765 podStartE2EDuration="1m13.74048765s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.738047795 +0000 UTC m=+97.733872246" watchObservedRunningTime="2026-01-23 18:08:22.74048765 +0000 UTC m=+97.736312091"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.739059 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.745794 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" event={"ID":"4bc9750e-684a-4163-85c7-328d7a64ac9b","Type":"ContainerStarted","Data":"5ea6054364e5e0ce313e7e28de437276bd1f026fb08b12720692d87cc8c5b6f0"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.746695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.747852 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.247837106 +0000 UTC m=+98.243661547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.771537 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" event={"ID":"9161065b-30e0-4eea-b615-829617fe9b26","Type":"ContainerStarted","Data":"8262c5c17da3bd80a872fc4feda4aaa30db1f7566b95c040bf538f0f6d643c0a"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.774057 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" event={"ID":"1ace2e31-8536-4df1-aa9c-78dd8de6f170","Type":"ContainerStarted","Data":"70d6c6c766483f97e6613574628d13376be538f0af11615d481373c6f15ba692"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.779977 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xxv4w"]
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.802817 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-f29lx" podStartSLOduration=73.802794109 podStartE2EDuration="1m13.802794109s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.78366673 +0000 UTC m=+97.779491171" watchObservedRunningTime="2026-01-23 18:08:22.802794109 +0000 UTC m=+97.798618550"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.803508 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" event={"ID":"f3396880-cea3-401c-bcff-b9477770ead5","Type":"ContainerStarted","Data":"9b4e522e22cccd96a2b1b942e8130e47060a85147499657dec925a0ed8966144"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.803550 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" event={"ID":"f3396880-cea3-401c-bcff-b9477770ead5","Type":"ContainerStarted","Data":"e5b7f6604fbcb990fb0e5fc4cef94e4a0637bfc1551f864206137e1cbfa1ea4f"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.812674 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw"]
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.814018 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8rxmx" podStartSLOduration=73.813997117 podStartE2EDuration="1m13.813997117s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.807040202 +0000 UTC m=+97.802864653" watchObservedRunningTime="2026-01-23 18:08:22.813997117 +0000 UTC m=+97.809821558"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.853179 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.856526 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.356500979 +0000 UTC m=+98.352325420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.863019 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" podStartSLOduration=74.862996311 podStartE2EDuration="1m14.862996311s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.829329195 +0000 UTC m=+97.825153656" watchObservedRunningTime="2026-01-23 18:08:22.862996311 +0000 UTC m=+97.858820762"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.885766 4688 generic.go:334] "Generic (PLEG): container finished" podID="f9fd8784-aa6e-486b-98a6-cc9536032892" containerID="b7aec73386765c9fb93e1e3845da4b32d16afbb8ea1f7de4f094fba44811aced" exitCode=0
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.887259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" event={"ID":"f9fd8784-aa6e-486b-98a6-cc9536032892","Type":"ContainerDied","Data":"b7aec73386765c9fb93e1e3845da4b32d16afbb8ea1f7de4f094fba44811aced"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.898563 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wptf8" podStartSLOduration=74.898541228 podStartE2EDuration="1m14.898541228s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:22.895491426 +0000 UTC m=+97.891315867" watchObservedRunningTime="2026-01-23 18:08:22.898541228 +0000 UTC m=+97.894365669"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.907635 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" event={"ID":"13cd9307-1c77-45f6-94c5-b27f7542281b","Type":"ContainerStarted","Data":"c6b36a9b4405abc3092215451bcc22a8ef2783a338f481d64201ec029e5a4d1c"}
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.919970 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gd24s"
Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.936315 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"] Jan 23 18:08:22 crc kubenswrapper[4688]: I0123 18:08:22.961359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:22 crc kubenswrapper[4688]: E0123 18:08:22.961894 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.461869744 +0000 UTC m=+98.457694185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.080045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.080647 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.580617434 +0000 UTC m=+98.576441875 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.189085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.189451 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.689439331 +0000 UTC m=+98.685263762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.292818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.293199 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.793167253 +0000 UTC m=+98.788991694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.299620 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:23 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:23 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:23 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.299682 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.391365 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnkg6"] Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.395598 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.396048 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:23.896028611 +0000 UTC m=+98.891853052 (durationBeforeRetry 500ms). 
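The MountVolume.MountDevice and UnmountVolume.TearDown failures repeating through this window share one root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet's plugin registry, so every mount and teardown attempt fails at the driver-lookup step. The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-xnkg6 just above shows the driver pod only now being started; until it registers, nestedpendingoperations requeues each operation with the durationBeforeRetry visible in every E-level entry. A minimal Go sketch of that gate-and-retry pattern follows; it is illustrative only, with hypothetical names, not kubelet source, and it pins the delay at the 500ms this window happens to log (kubelet's real per-operation backoff can grow).

    // Sketch: a mount attempt gated on a driver registry, retried on a delay.
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // driverRegistry stands in for the kubelet-side list of registered CSI plugins.
    type driverRegistry struct {
    	mu      sync.Mutex
    	drivers map[string]struct{}
    }

    func (r *driverRegistry) register(name string) {
    	r.mu.Lock()
    	defer r.mu.Unlock()
    	r.drivers[name] = struct{}{}
    }

    // client fails exactly the way the log does while the driver is absent.
    func (r *driverRegistry) client(name string) error {
    	r.mu.Lock()
    	defer r.mu.Unlock()
    	if _, ok := r.drivers[name]; !ok {
    		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    	}
    	return nil
    }

    func main() {
    	reg := &driverRegistry{drivers: map[string]struct{}{}}

    	// The plugin pod registers its driver a little later, as
    	// csi-hostpathplugin-xnkg6 eventually does in the log above.
    	go func() {
    		time.Sleep(1200 * time.Millisecond)
    		reg.register("kubevirt.io.hostpath-provisioner")
    	}()

    	backoff := 500 * time.Millisecond // matches durationBeforeRetry 500ms
    	for {
    		if err := reg.client("kubevirt.io.hostpath-provisioner"); err == nil {
    			fmt.Println("MountDevice can proceed")
    			return
    		} else {
    			fmt.Printf("MountVolume.MountDevice failed: %v; no retries permitted for %v\n", err, backoff)
    		}
    		time.Sleep(backoff)
    	}
    }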
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.409684 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.409800 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jhn49"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.413310 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm"]
Jan 23 18:08:23 crc kubenswrapper[4688]: W0123 18:08:23.414613 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd81314a_84fb_4f6d_92f3_de71c92238d9.slice/crio-776df5fd4b0d9a64802d550efd5c348f2a0f7200e83786b881c0a8c8b9e9e5ab WatchSource:0}: Error finding container 776df5fd4b0d9a64802d550efd5c348f2a0f7200e83786b881c0a8c8b9e9e5ab: Status 404 returned error can't find the container with id 776df5fd4b0d9a64802d550efd5c348f2a0f7200e83786b881c0a8c8b9e9e5ab
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.414607 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6fl6"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.416873 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svczn"]
Jan 23 18:08:23 crc kubenswrapper[4688]: W0123 18:08:23.426000 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba751b5a_7f01_46b9_9734_56d19059f727.slice/crio-b14d880e297ee29766135493c0a256bf75424f5dfd2921fd55f1d4760272bf50 WatchSource:0}: Error finding container b14d880e297ee29766135493c0a256bf75424f5dfd2921fd55f1d4760272bf50: Status 404 returned error can't find the container with id b14d880e297ee29766135493c0a256bf75424f5dfd2921fd55f1d4760272bf50
Jan 23 18:08:23 crc kubenswrapper[4688]: W0123 18:08:23.426194 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48574a66_36e9_4915_a747_5ad9e653d135.slice/crio-d2459e6c8257559cff4a563a7b4827b0c622efa9b341c445f0feff89f5dce05e WatchSource:0}: Error finding container d2459e6c8257559cff4a563a7b4827b0c622efa9b341c445f0feff89f5dce05e: Status 404 returned error can't find the container with id d2459e6c8257559cff4a563a7b4827b0c622efa9b341c445f0feff89f5dce05e
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.429544 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.429607 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-twvtx"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.447027 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.474251 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.475881 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.480493 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.483704 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vjdm"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.497297 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.500884 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.000840941 +0000 UTC m=+98.996665382 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.509040 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.519038 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kjslx"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.523411 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.523645 4688 csr.go:261] certificate signing request csr-fhjrt is approved, waiting to be issued
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.526707 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.530232 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.537005 4688 csr.go:257] certificate signing request csr-fhjrt is issued
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.544340 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh"]
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.599217 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.599624 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.09960497 +0000 UTC m=+99.095429411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: W0123 18:08:23.609899 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516f90dd_64de_4a63_8420_0c963c358692.slice/crio-b8e9769be4e6309bade7479f6d24c3998cd766a2a729f48ab738c40e5ae8434c WatchSource:0}: Error finding container b8e9769be4e6309bade7479f6d24c3998cd766a2a729f48ab738c40e5ae8434c: Status 404 returned error can't find the container with id b8e9769be4e6309bade7479f6d24c3998cd766a2a729f48ab738c40e5ae8434c
Jan 23 18:08:23 crc kubenswrapper[4688]: W0123 18:08:23.613368 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedbce9b5_49b4_466d_b96b_dd40e492ede6.slice/crio-0e2b0f9d37370272d89c2d93ff0522795a1e9dbed61480cdbd072d8dcbf2fe93 WatchSource:0}: Error finding container 0e2b0f9d37370272d89c2d93ff0522795a1e9dbed61480cdbd072d8dcbf2fe93: Status 404 returned error can't find the container with id 0e2b0f9d37370272d89c2d93ff0522795a1e9dbed61480cdbd072d8dcbf2fe93
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.700755 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.703263 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.201079331 +0000 UTC m=+99.196903772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.703369 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.703752 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.203738172 +0000 UTC m=+99.199562613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.804747 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.804973 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.304902315 +0000 UTC m=+99.300726756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.805078 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.805393 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.305383238 +0000 UTC m=+99.301207689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.907283 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.907421 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.407394254 +0000 UTC m=+99.403218695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.908602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:23 crc kubenswrapper[4688]: E0123 18:08:23.909010 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.408991516 +0000 UTC m=+99.404815957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.911325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" event={"ID":"516f90dd-64de-4a63-8420-0c963c358692","Type":"ContainerStarted","Data":"b8e9769be4e6309bade7479f6d24c3998cd766a2a729f48ab738c40e5ae8434c"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.912262 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" event={"ID":"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f","Type":"ContainerStarted","Data":"073c8010296cec4d84ac61de4c153e2947c219af260b8f94ab344322b30a0167"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.913946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" event={"ID":"41317431-17cf-46e5-997e-afcc7b8d01e3","Type":"ContainerStarted","Data":"301e6a6d4711b2793d445d3c94d9c7b6cb26a7a87ded53a593cb63e5dcc41721"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.914003 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" event={"ID":"41317431-17cf-46e5-997e-afcc7b8d01e3","Type":"ContainerStarted","Data":"53bafe9fa801748b386a0e7ea249f8a71713a76a5f6e5dce572c5cb264997894"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.917228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" event={"ID":"f9fd8784-aa6e-486b-98a6-cc9536032892","Type":"ContainerStarted","Data":"96537a6406b03b02451399a711879ca8f3d8bf75f3077a0aece1bce55e633dc1"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.918673 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" event={"ID":"7b5b3930-a465-4c33-8efe-273fd9f7ca59","Type":"ContainerStarted","Data":"48fad162a5e896e9598ef4ead066dad156f9884cd450e4b067a47bdd65335237"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.919972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" event={"ID":"48574a66-36e9-4915-a747-5ad9e653d135","Type":"ContainerStarted","Data":"d2459e6c8257559cff4a563a7b4827b0c622efa9b341c445f0feff89f5dce05e"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.921360 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" event={"ID":"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c","Type":"ContainerStarted","Data":"bed5b408422d720a9bb8d21ee804a622cea801ce1167fdf3c59219c8741a78d4"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.922519 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twvtx" event={"ID":"7284a1ab-8a12-4cae-89f6-f1da071d6cce","Type":"ContainerStarted","Data":"51d7f2d1dae2d10c51d7d2aa03e771cf2e8841db0fd453700a16062ec3bc88e4"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.923520 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" event={"ID":"79605b45-524f-433a-88f6-8b7ab42c85e6","Type":"ContainerStarted","Data":"891ac7eda30cd329cd26aed857b6c512df5c19dd71e8f6ede5245ca6a0916a11"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.924522 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" event={"ID":"edbce9b5-49b4-466d-b96b-dd40e492ede6","Type":"ContainerStarted","Data":"0e2b0f9d37370272d89c2d93ff0522795a1e9dbed61480cdbd072d8dcbf2fe93"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.927899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" event={"ID":"a00335ed-4674-4448-b37b-b71713264800","Type":"ContainerStarted","Data":"7b8db562cb374adc51ffde7a1778ffd7a756ff9392bcae77a556e55422d86dbf"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.927971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" event={"ID":"a00335ed-4674-4448-b37b-b71713264800","Type":"ContainerStarted","Data":"e78416474cc6e3cf40127b5ef25028d2a16c36f1e74d3b106366d4968179c541"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.928038 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.929987 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" event={"ID":"13cd9307-1c77-45f6-94c5-b27f7542281b","Type":"ContainerStarted","Data":"500f50d434c9fcee8f339c33d3b9620e18df113e7c62e7da19394c3388458cb2"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.930042 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" event={"ID":"13cd9307-1c77-45f6-94c5-b27f7542281b","Type":"ContainerStarted","Data":"fae0144d5ae780668bc9e3b9a37e30690b92ca3e6073fa6c6335f6854b0a0f86"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.933339 4688 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-h8xb9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.933401 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" podUID="a00335ed-4674-4448-b37b-b71713264800" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.935979 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xxv4w" podStartSLOduration=74.935953804 podStartE2EDuration="1m14.935953804s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:23.931034483 +0000 UTC m=+98.926858924" watchObservedRunningTime="2026-01-23 18:08:23.935953804 +0000 UTC m=+98.931778295"
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.937567 4688 generic.go:334] "Generic (PLEG): container finished" podID="4bc9750e-684a-4163-85c7-328d7a64ac9b" containerID="5ea6054364e5e0ce313e7e28de437276bd1f026fb08b12720692d87cc8c5b6f0" exitCode=0
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.938276 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" event={"ID":"4bc9750e-684a-4163-85c7-328d7a64ac9b","Type":"ContainerDied","Data":"5ea6054364e5e0ce313e7e28de437276bd1f026fb08b12720692d87cc8c5b6f0"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.940778 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" event={"ID":"1f885d7f-713d-48bd-b80a-51807d564fff","Type":"ContainerStarted","Data":"26bbec9250c176742c73466d9252bdbccad9df0cd9adc730525465d2e1955e77"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.942651 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" event={"ID":"fd81314a-84fb-4f6d-92f3-de71c92238d9","Type":"ContainerStarted","Data":"776df5fd4b0d9a64802d550efd5c348f2a0f7200e83786b881c0a8c8b9e9e5ab"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.943772 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svczn" event={"ID":"8bb11912-99d3-4d3c-82bf-cc347a2b1d93","Type":"ContainerStarted","Data":"5db65f176f3830a2186fd70cce4aef45798c49313860136333a367d34f5d31f0"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.946358 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" event={"ID":"f3396880-cea3-401c-bcff-b9477770ead5","Type":"ContainerStarted","Data":"cd65b1b373469c079762dac004ed39a2521bcc87f68c7f19244eed13e5c2ae09"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.947932 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" event={"ID":"4203f041-a5af-47a8-999b-329b617fe415","Type":"ContainerStarted","Data":"9ffd452b09e580e127480a522832c81897bcaf989530502876e05637653f49be"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.948887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" event={"ID":"ba751b5a-7f01-46b9-9734-56d19059f727","Type":"ContainerStarted","Data":"b14d880e297ee29766135493c0a256bf75424f5dfd2921fd55f1d4760272bf50"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.952259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" event={"ID":"1a4f2b0b-8d76-4871-8197-1c12a79726e3","Type":"ContainerStarted","Data":"b048d254918edf19582e61da242ef0b14e5be2bb06318367de751caa31436251"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.955031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" event={"ID":"9161065b-30e0-4eea-b615-829617fe9b26","Type":"ContainerStarted","Data":"7825ef0e66068d1a88c143b0d44f383320a8607a85d261ce3d6c74def72eebb5"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.956089 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4kq47" podStartSLOduration=74.956068609 podStartE2EDuration="1m14.956068609s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:23.950695116 +0000 UTC m=+98.946519557" watchObservedRunningTime="2026-01-23 18:08:23.956068609 +0000 UTC m=+98.951893050"
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.958204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" event={"ID":"4297e801-77fd-43f7-ba12-4b620088a5d2","Type":"ContainerStarted","Data":"1c59479cb9938fb4fe67656104e7a93f852b45b88fd19de43fd1002d56a2d39a"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.959478 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" event={"ID":"9cec23b4-8312-4b09-b9ea-b93202b96afd","Type":"ContainerStarted","Data":"e4992eddf08b3077a489d79f692e55c29cf547e9dc552a45884395f225c15d65"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.967812 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9" podStartSLOduration=74.967793931 podStartE2EDuration="1m14.967793931s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:23.966495217 +0000 UTC m=+98.962319658" watchObservedRunningTime="2026-01-23 18:08:23.967793931 +0000 UTC m=+98.963618392"
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.970389 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" event={"ID":"3dd2dac7-df42-40f3-8944-213d34513bc9","Type":"ContainerStarted","Data":"7b3a4e7f6cef13571a0bd8f447db7cd16ede784b4bc857c1c2e34b15da8a56b1"}
Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.970434 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" event={"ID":"3dd2dac7-df42-40f3-8944-213d34513bc9","Type":"ContainerStarted","Data":"18434c4b739f51de3b826e6b2680e8aa5c28f8078ddcdd3a13fab086e1d05298"}
event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" event={"ID":"3dd2dac7-df42-40f3-8944-213d34513bc9","Type":"ContainerStarted","Data":"18434c4b739f51de3b826e6b2680e8aa5c28f8078ddcdd3a13fab086e1d05298"} Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.974307 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" event={"ID":"71ab11f7-6719-4e2a-8993-4c7eed4d51c3","Type":"ContainerStarted","Data":"16450ed42be728327e0317d9d79513112fd53e18ecf4f9c01c714c956e7d9ff5"} Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.975393 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:23 crc kubenswrapper[4688]: I0123 18:08:23.975426 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.024909 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.026320 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.526289139 +0000 UTC m=+99.522113580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.030621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.031500 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.531473667 +0000 UTC m=+99.527298108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.042356 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9zmz8" podStartSLOduration=75.042331806 podStartE2EDuration="1m15.042331806s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:24.007854168 +0000 UTC m=+99.003678609" watchObservedRunningTime="2026-01-23 18:08:24.042331806 +0000 UTC m=+99.038156247" Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.132944 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.133162 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.633130193 +0000 UTC m=+99.628954634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.133831 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.134293 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.634282604 +0000 UTC m=+99.630107045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.305651 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.306243 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.80620971 +0000 UTC m=+99.802034161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.314496 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:24 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:24 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:24 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.314592 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.415460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.416003 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:24.915979783 +0000 UTC m=+99.911804224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.595123 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 18:03:23 +0000 UTC, rotation deadline is 2026-11-10 07:29:43.864038603 +0000 UTC Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.595487 4688 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6973h21m19.268556752s for next certificate rotation Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.596126 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.596438 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.096423756 +0000 UTC m=+100.092248197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.596476 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.596833 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.096821567 +0000 UTC m=+100.092646008 (durationBeforeRetry 500ms). 
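The probe traffic in this window has two distinct failure shapes. Endpoints that are not listening yet fail at the TCP layer ("connect: connection refused", as for console-operator, downloads, olm-operator, and marketplace-operator), while the router's startup probe reaches a live server that answers 500 and itemizes its failing sub-checks ([-]backend-http, [-]has-synced) in the response body. A small Go sketch distinguishing the two follows; it is illustrative, the URL is hypothetical, and it only mirrors the documented probe convention that 2xx-3xx counts as success:

    // Sketch: an HTTP readiness-style check with both failure modes above.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		// Socket not open yet, e.g. "connect: connection refused".
    		return fmt.Errorf("probe failed: %w", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		// Server up but unhealthy: report status and start of body,
    		// as the router's healthz check does above.
    		body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
    		return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body=%s", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	if err := probe("http://10.217.0.38:8080/healthz"); err != nil {
    		fmt.Println(err)
    	}
    }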
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.619027 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vv5s9" Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.651300 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lm9gw" podStartSLOduration=75.651275186 podStartE2EDuration="1m15.651275186s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:24.041536925 +0000 UTC m=+99.037361376" watchObservedRunningTime="2026-01-23 18:08:24.651275186 +0000 UTC m=+99.647099637" Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.700287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.700785 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.200766064 +0000 UTC m=+100.196590505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.802658 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.802980 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.302968844 +0000 UTC m=+100.298793285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:24 crc kubenswrapper[4688]: I0123 18:08:24.903996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:24 crc kubenswrapper[4688]: E0123 18:08:24.904523 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.404503077 +0000 UTC m=+100.400327528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.112127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.112457 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.612444183 +0000 UTC m=+100.608268624 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.386916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" event={"ID":"48574a66-36e9-4915-a747-5ad9e653d135","Type":"ContainerStarted","Data":"c857de38ab0ea13a7d8659cebfd78e8792035140d209254629485b98fd678350"}
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.390406 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.392168 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.892138838 +0000 UTC m=+100.887963279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.394939 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k6fl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.402236 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.402421 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.402748 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 18:08:25 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Jan 23 18:08:25 crc kubenswrapper[4688]: [+]process-running ok
Jan 23 18:08:25 crc kubenswrapper[4688]: healthz check failed
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.402821 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.410120 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:25.910074756 +0000 UTC m=+100.905899197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.454776 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" event={"ID":"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba","Type":"ContainerStarted","Data":"53697c1417998185088e6ab00d8fb782562188ca458e316ce3ddcfc5d95edb02"}
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.454835 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.454998 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svczn" event={"ID":"8bb11912-99d3-4d3c-82bf-cc347a2b1d93","Type":"ContainerStarted","Data":"21887ade135b0df1e8397b7b7a3907fecad6560fcf33fbbcd7b3e937d3f38906"}
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.465265 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" event={"ID":"72866af2-cf21-4ff1-bff0-a750c155801d","Type":"ContainerStarted","Data":"80003ad6d0f5ba4862cc1b46cf1ec4177b63bdc0b1f095b1e9f4d095f846c6dd"}
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.467376 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.467444 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.485925 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h8xb9"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.525973 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.527904 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.027883032 +0000 UTC m=+101.023707483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.618322 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podStartSLOduration=76.618302728 podStartE2EDuration="1m16.618302728s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:25.616834539 +0000 UTC m=+100.612658980" watchObservedRunningTime="2026-01-23 18:08:25.618302728 +0000 UTC m=+100.614127169"
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.629804 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.630170 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.130152263 +0000 UTC m=+101.125976704 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.730481 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.730994 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.230975617 +0000 UTC m=+101.226800058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.832695 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.833614 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.333599629 +0000 UTC m=+101.329424070 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:25 crc kubenswrapper[4688]: I0123 18:08:25.933362 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:25 crc kubenswrapper[4688]: E0123 18:08:25.933983 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.433963421 +0000 UTC m=+101.429787862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.035046 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.035519 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.535501394 +0000 UTC m=+101.531325835 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.152099 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.152585 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.65256275 +0000 UTC m=+101.648387191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.253471 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.253917 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.753901048 +0000 UTC m=+101.749725489 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.295431 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 18:08:26 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Jan 23 18:08:26 crc kubenswrapper[4688]: [+]process-running ok
Jan 23 18:08:26 crc kubenswrapper[4688]: healthz check failed
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.295511 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.373977 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.374127 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.874100538 +0000 UTC m=+101.869924969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.374795 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.375109 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:26.875087064 +0000 UTC m=+101.870911505 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.579476 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.579891 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.079875295 +0000 UTC m=+102.075699736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.648452 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" event={"ID":"72866af2-cf21-4ff1-bff0-a750c155801d","Type":"ContainerStarted","Data":"cd28abbe63630cf1aefac6693394ac1a6a1cfbf229df39fee150be05944b2170"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.654101 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" event={"ID":"9cec23b4-8312-4b09-b9ea-b93202b96afd","Type":"ContainerStarted","Data":"6d355dc35d3ff390be56bae07dd093a8fb487b30c5a3e19fe540eb676d685103"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.665447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" event={"ID":"1a4f2b0b-8d76-4871-8197-1c12a79726e3","Type":"ContainerStarted","Data":"8095b47210024ea9cc212bf52e9a4ec51b07a4cf011b5686108a30439d342a16"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.680783 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.681424 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.181411418 +0000 UTC m=+102.177235859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.704171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" event={"ID":"e70b83d8-0bf4-49d7-afcc-d5240c7bf0ba","Type":"ContainerStarted","Data":"31e180668660b71feb52b2b712932f0761a3d873327327cd7a01948ad474719f"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.709671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" event={"ID":"1f885d7f-713d-48bd-b80a-51807d564fff","Type":"ContainerStarted","Data":"3f3a558a9a4e7577952e1772d06d0188e5c59dcb73b2da54315a0edcac77be86"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.709662 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-jxjcx" podStartSLOduration=77.70963437 podStartE2EDuration="1m17.70963437s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.709002393 +0000 UTC m=+101.704826834" watchObservedRunningTime="2026-01-23 18:08:26.70963437 +0000 UTC m=+101.705458811"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.710738 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.715050 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xft6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.715395 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podUID="1f885d7f-713d-48bd-b80a-51807d564fff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.730484 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" event={"ID":"7b5b3930-a465-4c33-8efe-273fd9f7ca59","Type":"ContainerStarted","Data":"78b641c120a450b7ae55b3962681e5f1e717bb79581d9382dde610f5fd7df86e"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.748315 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t26qj" podStartSLOduration=77.748300409 podStartE2EDuration="1m17.748300409s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.747746194 +0000 UTC m=+101.743570635" watchObservedRunningTime="2026-01-23 18:08:26.748300409 +0000 UTC m=+101.744124850"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.760053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" event={"ID":"4bc9750e-684a-4163-85c7-328d7a64ac9b","Type":"ContainerStarted","Data":"a56d016399e4e22ab53e31a461b288c338c7701b6ffec87d0c246d56728ecc30"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.760755 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.768654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" event={"ID":"71ab11f7-6719-4e2a-8993-4c7eed4d51c3","Type":"ContainerStarted","Data":"1fe4161493789aa60e9d281d12f5246d50e6cb98979fd8270413509049abc3f3"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.778654 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7vjdm" podStartSLOduration=77.778628846 podStartE2EDuration="1m17.778628846s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.775777391 +0000 UTC m=+101.771601832" watchObservedRunningTime="2026-01-23 18:08:26.778628846 +0000 UTC m=+101.774453297"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.783818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.784353 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.284310368 +0000 UTC m=+102.280134809 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.792534 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" event={"ID":"7b4e9061-966a-40a1-bbc8-dd8dc3bc530f","Type":"ContainerStarted","Data":"7ba21c19b20fee9a0050dac7da88b294af4bf69c8b6029c520f5dd9268752e3a"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.792940 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.793907 4688 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hbb56 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.793956 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" podUID="7b4e9061-966a-40a1-bbc8-dd8dc3bc530f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.812166 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svczn" event={"ID":"8bb11912-99d3-4d3c-82bf-cc347a2b1d93","Type":"ContainerStarted","Data":"cf43d7f539e10bd6b4db36fba80b12030ac113ce9034d2a39a3cb6500b1daa73"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.812494 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-svczn"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.835165 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" podStartSLOduration=77.835143181 podStartE2EDuration="1m17.835143181s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.820917852 +0000 UTC m=+101.816742293" watchObservedRunningTime="2026-01-23 18:08:26.835143181 +0000 UTC m=+101.830967622"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.858683 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" event={"ID":"79605b45-524f-433a-88f6-8b7ab42c85e6","Type":"ContainerStarted","Data":"24fa992b6744ba769a8d93c79cc8033ed406a3851c538f10ae3105f8574c044f"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.890541 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.891826 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.391812989 +0000 UTC m=+102.387637430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.892960 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podStartSLOduration=77.892939199 podStartE2EDuration="1m17.892939199s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.88920546 +0000 UTC m=+101.885029911" watchObservedRunningTime="2026-01-23 18:08:26.892939199 +0000 UTC m=+101.888763640"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.900726 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-twvtx" event={"ID":"7284a1ab-8a12-4cae-89f6-f1da071d6cce","Type":"ContainerStarted","Data":"9384b76a51c78f5b0d0bf5e562b12e13aae67230b04ec4d4301a1130971be7dd"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.916531 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" event={"ID":"516f90dd-64de-4a63-8420-0c963c358692","Type":"ContainerStarted","Data":"7f8d7c17c7b4df201dc575b8e417a3dc50911ec8197a14520a5424b8f32b025c"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.933371 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-svczn" podStartSLOduration=10.933350425 podStartE2EDuration="10.933350425s" podCreationTimestamp="2026-01-23 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.931632059 +0000 UTC m=+101.927456500" watchObservedRunningTime="2026-01-23 18:08:26.933350425 +0000 UTC m=+101.929174866"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.941727 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" event={"ID":"edbce9b5-49b4-466d-b96b-dd40e492ede6","Type":"ContainerStarted","Data":"6126cf3d1fed98911c720186503364e5648fac1c98de2e4b76f22eeb1b55639b"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.963971 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" podStartSLOduration=77.96395529 podStartE2EDuration="1m17.96395529s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.959705037 +0000 UTC m=+101.955529478" watchObservedRunningTime="2026-01-23 18:08:26.96395529 +0000 UTC m=+101.959779731"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.976712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" event={"ID":"4203f041-a5af-47a8-999b-329b617fe415","Type":"ContainerStarted","Data":"3773f9926b16fd0793c59787d93272b99f249da10bac90a2bcadec0f8e15160b"}
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.992640 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.993012 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.492985603 +0000 UTC m=+102.488810044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.993139 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:26 crc kubenswrapper[4688]: I0123 18:08:26.993220 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-82hnc" podStartSLOduration=77.993158457 podStartE2EDuration="1m17.993158457s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:26.989936332 +0000 UTC m=+101.985760773" watchObservedRunningTime="2026-01-23 18:08:26.993158457 +0000 UTC m=+101.988982898"
Jan 23 18:08:26 crc kubenswrapper[4688]: E0123 18:08:26.993569 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.493559238 +0000 UTC m=+102.489383679 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.029566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" event={"ID":"f9fd8784-aa6e-486b-98a6-cc9536032892","Type":"ContainerStarted","Data":"a082dc844912c64064512388852f77a5d595fcf2363b54f2378dac5b6ea19764"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.047363 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hdshg" podStartSLOduration=78.0473407 podStartE2EDuration="1m18.0473407s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.042649075 +0000 UTC m=+102.038473526" watchObservedRunningTime="2026-01-23 18:08:27.0473407 +0000 UTC m=+102.043165141"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.065613 4688 generic.go:334] "Generic (PLEG): container finished" podID="9161065b-30e0-4eea-b615-829617fe9b26" containerID="7825ef0e66068d1a88c143b0d44f383320a8607a85d261ce3d6c74def72eebb5" exitCode=0
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.065673 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" event={"ID":"9161065b-30e0-4eea-b615-829617fe9b26","Type":"ContainerDied","Data":"7825ef0e66068d1a88c143b0d44f383320a8607a85d261ce3d6c74def72eebb5"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.070246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" event={"ID":"4297e801-77fd-43f7-ba12-4b620088a5d2","Type":"ContainerStarted","Data":"887dc9d7478d5e32ce40a7dd5526d5e340ee001995d5475e261ee2b8df087fe3"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.097623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" event={"ID":"ba751b5a-7f01-46b9-9734-56d19059f727","Type":"ContainerStarted","Data":"506877fe7a998df9ee79c9f8d8d34ec6bbf73ecdacaef1045274d427aaf6d63c"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.097705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" event={"ID":"ba751b5a-7f01-46b9-9734-56d19059f727","Type":"ContainerStarted","Data":"c285cc49057fcdb03d2a235cba6a91fd25505119dcfe11ce9c95226391743d3b"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.098714 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.098843 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.59881328 +0000 UTC m=+102.594637731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.098981 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.099943 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.59992881 +0000 UTC m=+102.595753341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.103696 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" event={"ID":"6fc063c6-3ef7-45b5-8fd5-52c1f27e1f0c","Type":"ContainerStarted","Data":"3be41236a555010f80b5f23527675f06b96cea8b97b417cbc08770880f781d00"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.169855 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" event={"ID":"fd81314a-84fb-4f6d-92f3-de71c92238d9","Type":"ContainerStarted","Data":"ccdbfe2e5c82f33289b9dcecc37b59d484fd4e2caaa8c7dd13210beeaf777e4d"}
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.170539 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k6fl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.170565 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.176435 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-twvtx" podStartSLOduration=11.176415606 podStartE2EDuration="11.176415606s" podCreationTimestamp="2026-01-23 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.098039699 +0000 UTC m=+102.093864170" watchObservedRunningTime="2026-01-23 18:08:27.176415606 +0000 UTC m=+102.172240047"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.208787 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.210444 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.710423191 +0000 UTC m=+102.706247642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.252551 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-fcftc" podStartSLOduration=78.252526132 podStartE2EDuration="1m18.252526132s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.250484777 +0000 UTC m=+102.246309218" watchObservedRunningTime="2026-01-23 18:08:27.252526132 +0000 UTC m=+102.248350573"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.253893 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" podStartSLOduration=78.253879918 podStartE2EDuration="1m18.253879918s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.20550694 +0000 UTC m=+102.201331381" watchObservedRunningTime="2026-01-23 18:08:27.253879918 +0000 UTC m=+102.249704359"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.288207 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 18:08:27 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld
Jan 23 18:08:27 crc kubenswrapper[4688]: [+]process-running ok
Jan 23 18:08:27 crc kubenswrapper[4688]: healthz check failed
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.288273 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.299371 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kncxm" podStartSLOduration=78.299347968 podStartE2EDuration="1m18.299347968s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.297242732 +0000 UTC m=+102.293067183" watchObservedRunningTime="2026-01-23 18:08:27.299347968 +0000 UTC m=+102.295172409"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.313246 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.313874 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.813839864 +0000 UTC m=+102.809664305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.404575 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" podStartSLOduration=79.404554119 podStartE2EDuration="1m19.404554119s" podCreationTimestamp="2026-01-23 18:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.404122867 +0000 UTC m=+102.399947308" watchObservedRunningTime="2026-01-23 18:08:27.404554119 +0000 UTC m=+102.400378560"
Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.415703 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.415968 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.915914241 +0000 UTC m=+102.911738692 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.416041 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.416412 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:27.916398784 +0000 UTC m=+102.912223225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.512912 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jhn49" podStartSLOduration=78.512733829 podStartE2EDuration="1m18.512733829s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:27.509576225 +0000 UTC m=+102.505400676" watchObservedRunningTime="2026-01-23 18:08:27.512733829 +0000 UTC m=+102.508558280" Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.517887 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.518563 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.018542043 +0000 UTC m=+103.014366484 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.619730 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.620141 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.120125597 +0000 UTC m=+103.115950048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.721234 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.721467 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.221432884 +0000 UTC m=+103.217257325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.721716 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.722088 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.222080522 +0000 UTC m=+103.217904963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.835419 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.835784 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.335768278 +0000 UTC m=+103.331592719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:27 crc kubenswrapper[4688]: I0123 18:08:27.937007 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:27 crc kubenswrapper[4688]: E0123 18:08:27.937585 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.437563298 +0000 UTC m=+103.433387799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.068702 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.069115 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.569091509 +0000 UTC m=+103.564915950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.264150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.264607 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.764585423 +0000 UTC m=+103.760409924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.293449 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:28 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:28 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:28 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.293532 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.335112 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" event={"ID":"71ab11f7-6719-4e2a-8993-4c7eed4d51c3","Type":"ContainerStarted","Data":"2449f78e686c0eeff74800719b990aef5a163e8bfc3735ec18948d676625fd7c"} Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.335210 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.360381 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" event={"ID":"516f90dd-64de-4a63-8420-0c963c358692","Type":"ContainerStarted","Data":"74ddeb9b70c4ed87e42839e31b8e9d009dab2259c201cecff286298545236f2d"} Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.365096 4688 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.365977 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.865955902 +0000 UTC m=+103.861780343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.366280 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8g4rh" event={"ID":"edbce9b5-49b4-466d-b96b-dd40e492ede6","Type":"ContainerStarted","Data":"e35333a99b3295dfad91bc8ada101d73bd772b3de214ec1385469b7fef4dbcce"} Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.377507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" event={"ID":"72866af2-cf21-4ff1-bff0-a750c155801d","Type":"ContainerStarted","Data":"f9f8e2fa6ec2fd5ea3dfd04f14cf5ac82d39e51643fafb1e893657f7e149e5fd"} Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.385178 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" podStartSLOduration=79.385155393 podStartE2EDuration="1m19.385155393s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:28.383081208 +0000 UTC m=+103.378905659" watchObservedRunningTime="2026-01-23 18:08:28.385155393 +0000 UTC m=+103.380979834" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.392354 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" event={"ID":"7b5b3930-a465-4c33-8efe-273fd9f7ca59","Type":"ContainerStarted","Data":"53bc593e0df27af9ddc34f71c6a9dc102b98aca5833c50f7b714b9342be59fc6"} Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.397064 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k6fl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.397123 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" 
Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.397354 4688 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hbb56 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.397403 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" podUID="7b4e9061-966a-40a1-bbc8-dd8dc3bc530f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.401309 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xft6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.401363 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podUID="1f885d7f-713d-48bd-b80a-51807d564fff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.470418 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.470934 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:28.970911086 +0000 UTC m=+103.966735527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.480820 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rhb87" podStartSLOduration=79.480792779 podStartE2EDuration="1m19.480792779s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:28.473432733 +0000 UTC m=+103.469257204" watchObservedRunningTime="2026-01-23 18:08:28.480792779 +0000 UTC m=+103.476617220" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.480963 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kjslx" podStartSLOduration=79.480958733 podStartE2EDuration="1m19.480958733s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:28.421302135 +0000 UTC m=+103.417126586" watchObservedRunningTime="2026-01-23 18:08:28.480958733 +0000 UTC m=+103.476783174" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.539000 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6zkjn" podStartSLOduration=79.538978628 podStartE2EDuration="1m19.538978628s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:28.53794873 +0000 UTC m=+103.533773181" watchObservedRunningTime="2026-01-23 18:08:28.538978628 +0000 UTC m=+103.534803069" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.572592 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.574644 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.074609266 +0000 UTC m=+104.070433707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.675178 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.675333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.677010 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.176990772 +0000 UTC m=+104.172815213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.683148 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44e9c4ca-39a2-42f8-aac2-eca60087c3ed-metrics-certs\") pod \"network-metrics-daemon-kr87l\" (UID: \"44e9c4ca-39a2-42f8-aac2-eca60087c3ed\") " pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.739044 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kr87l" Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.776726 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.777172 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.277151718 +0000 UTC m=+104.272976159 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.879138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.879470 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.379452411 +0000 UTC m=+104.375276862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.979947 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.980206 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.480153692 +0000 UTC m=+104.475978133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:28 crc kubenswrapper[4688]: I0123 18:08:28.980512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:28 crc kubenswrapper[4688]: E0123 18:08:28.980943 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.480928993 +0000 UTC m=+104.476753504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.076100 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.076153 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.076201 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.076253 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.084096 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: 
E0123 18:08:29.084521 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.58450048 +0000 UTC m=+104.580324921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.087704 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.087748 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.089214 4688 patch_prober.go:28] interesting pod/console-f9d7485db-f29lx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.089278 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f29lx" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.110288 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.111033 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.121717 4688 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9c7cd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.121776 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" podUID="f9fd8784-aa6e-486b-98a6-cc9536032892" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.185321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.186817 4688 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.686796832 +0000 UTC m=+104.682621303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.294871 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.295438 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.795409093 +0000 UTC m=+104.791233534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.295713 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.300082 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:29 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:29 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:29 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.300131 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.349609 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.393404 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.403757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.405685 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:29.905661428 +0000 UTC m=+104.901485939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.419383 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" event={"ID":"fd81314a-84fb-4f6d-92f3-de71c92238d9","Type":"ContainerStarted","Data":"1532464e25c433d7ce700adf50034da4806eba1b24dab2dc6a1d4deb4ca3098b"} Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.428725 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.428971 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5" event={"ID":"9161065b-30e0-4eea-b615-829617fe9b26","Type":"ContainerDied","Data":"8262c5c17da3bd80a872fc4feda4aaa30db1f7566b95c040bf538f0f6d643c0a"} Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.429043 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8262c5c17da3bd80a872fc4feda4aaa30db1f7566b95c040bf538f0f6d643c0a" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.431725 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xft6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.431839 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podUID="1f885d7f-713d-48bd-b80a-51807d564fff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.490170 4688 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.507355 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.507425 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume\") pod \"9161065b-30e0-4eea-b615-829617fe9b26\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.507482 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume\") pod \"9161065b-30e0-4eea-b615-829617fe9b26\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.507632 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nvx2\" (UniqueName: \"kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2\") pod \"9161065b-30e0-4eea-b615-829617fe9b26\" (UID: \"9161065b-30e0-4eea-b615-829617fe9b26\") " Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.509213 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.009174674 +0000 UTC m=+105.004999115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.509474 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume" (OuterVolumeSpecName: "config-volume") pod "9161065b-30e0-4eea-b615-829617fe9b26" (UID: "9161065b-30e0-4eea-b615-829617fe9b26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.541815 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2" (OuterVolumeSpecName: "kube-api-access-8nvx2") pod "9161065b-30e0-4eea-b615-829617fe9b26" (UID: "9161065b-30e0-4eea-b615-829617fe9b26"). InnerVolumeSpecName "kube-api-access-8nvx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.546454 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9161065b-30e0-4eea-b615-829617fe9b26" (UID: "9161065b-30e0-4eea-b615-829617fe9b26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.609087 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.609327 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nvx2\" (UniqueName: \"kubernetes.io/projected/9161065b-30e0-4eea-b615-829617fe9b26-kube-api-access-8nvx2\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.609344 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9161065b-30e0-4eea-b615-829617fe9b26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.609354 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9161065b-30e0-4eea-b615-829617fe9b26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.610245 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.110229394 +0000 UTC m=+105.106053925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.674557 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xft6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.674617 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podUID="1f885d7f-713d-48bd-b80a-51807d564fff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.674675 4688 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xft6 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.674733 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" podUID="1f885d7f-713d-48bd-b80a-51807d564fff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.711561 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kr87l"] Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.712074 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.712278 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.2122572 +0000 UTC m=+105.208081651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.712422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.712731 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.212719652 +0000 UTC m=+105.208544093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.787520 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.787491733 podStartE2EDuration="787.491733ms" podCreationTimestamp="2026-01-23 18:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:29.786496276 +0000 UTC m=+104.782320737" watchObservedRunningTime="2026-01-23 18:08:29.787491733 +0000 UTC m=+104.783316174" Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.817782 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.817979 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.317946684 +0000 UTC m=+105.313771135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.818329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.818976 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.3189542 +0000 UTC m=+105.314778711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.925705 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.925973 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.425937998 +0000 UTC m=+105.421762449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:29 crc kubenswrapper[4688]: I0123 18:08:29.926150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:29 crc kubenswrapper[4688]: E0123 18:08:29.926561 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.426544334 +0000 UTC m=+105.422368775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.029132 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:30 crc kubenswrapper[4688]: E0123 18:08:30.029339 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.5292972 +0000 UTC m=+105.525121651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.029623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: E0123 18:08:30.030177 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.530158403 +0000 UTC m=+105.525982844 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.046682 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.063108 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hbb56" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.171911 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:30 crc kubenswrapper[4688]: E0123 18:08:30.172127 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.672105761 +0000 UTC m=+105.667930202 (durationBeforeRetry 500ms). 
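
[annotation] The identical failures above repeat because the kubelet's volume manager never retries a failed mount/unmount in place: each failure is parked in nestedpendingoperations with a delay ("durationBeforeRetry 500ms") and the reconciler picks the volume up again on its next pass, until the CSI driver registers. A minimal sketch of that requeue pattern, deliberately simplified to a single volume and a fixed delay (the real operation executor tracks per-volume state and can grow the backoff); the 1.2s "registration" point is an assumption for illustration only:

package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	// Assumption: the driver "registers" ~1.2s in, mirroring how
	// kubevirt.io.hostpath-provisioner only becomes usable once the
	// plugin watcher processes its registration socket.
	registeredAt := time.Now().Add(1200 * time.Millisecond)

	mountDevice := func() error {
		if time.Now().Before(registeredAt) {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil
	}

	const durationBeforeRetry = 500 * time.Millisecond // matches the records above
	for {
		if err := mountDevice(); err != nil {
			fmt.Printf("MountVolume.MountDevice failed: %v (durationBeforeRetry %v)\n", err, durationBeforeRetry)
			time.Sleep(durationBeforeRetry) // requeue with a delay instead of hot-looping
			continue
		}
		fmt.Println("MountVolume.MountDevice succeeded")
		return
	}
}

[/annotation]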
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.172638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: E0123 18:08:30.173070 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 18:08:30.673062107 +0000 UTC m=+105.668886548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6wxpp" (UID: "41670363-2317-44f9-82cf-e459e23cc97e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.266366 4688 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T18:08:29.49024929Z","Handler":null,"Name":""} Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.269100 4688 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.269136 4688 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.273604 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.278683 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
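
[annotation] In the line above, the plugin watcher finally processes the registration socket (/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock), csi_plugin.go validates and registers the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock, and the very next TearDown attempt for pvc-657094db succeeds. One way to confirm from outside the node that the kubelet has registered a CSI driver is to read the CSINode object it maintains; a sketch assuming standard client-go and a kubeconfig in the default location (the node name "crc" is taken from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The kubelet updates this object after plugin registration.
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range node.Spec.Drivers {
		// Until kubevirt.io.hostpath-provisioner appears here, its volumes
		// fail with "not found in the list of registered CSI drivers".
		fmt.Println(d.Name)
	}
}

The same list is visible with "kubectl get csinode crc -o yaml". [/annotation]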
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.287106 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:30 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:30 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:30 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.287194 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.406136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.412492 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.412541 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.498722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6wxpp\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") " pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.509766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" event={"ID":"fd81314a-84fb-4f6d-92f3-de71c92238d9","Type":"ContainerStarted","Data":"bdf9c454703bae80e858041468c373d8dcc51401b71cf5a5d4302a33ab673f46"} Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.511578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kr87l" event={"ID":"44e9c4ca-39a2-42f8-aac2-eca60087c3ed","Type":"ContainerStarted","Data":"c9e977588597b7392cf5c8c89c71a6236e6f1b8efe85a06a0644817d3f199cd5"} Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.511658 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kr87l" 
event={"ID":"44e9c4ca-39a2-42f8-aac2-eca60087c3ed","Type":"ContainerStarted","Data":"bbed442729ba0d1a495cf538a846148e0dbfa3337d5018007712e9b3fc9ecb88"} Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.582345 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.646124 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:08:30 crc kubenswrapper[4688]: E0123 18:08:30.646414 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9161065b-30e0-4eea-b615-829617fe9b26" containerName="collect-profiles" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.646433 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9161065b-30e0-4eea-b615-829617fe9b26" containerName="collect-profiles" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.646564 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9161065b-30e0-4eea-b615-829617fe9b26" containerName="collect-profiles" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.647389 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.650925 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.669595 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.717427 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.717532 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sksj6\" (UniqueName: \"kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.717635 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.842021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.842098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.842146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sksj6\" (UniqueName: \"kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.843803 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.844116 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.908838 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.910296 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.914212 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.931650 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sksj6\" (UniqueName: \"kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6\") pod \"certified-operators-gm9fn\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.942907 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2c8\" (UniqueName: \"kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.943401 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.943573 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " 
pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:30 crc kubenswrapper[4688]: I0123 18:08:30.967408 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.017144 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.044820 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2c8\" (UniqueName: \"kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.044952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.045014 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.045806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.046568 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.055903 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.057783 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.067901 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2c8\" (UniqueName: \"kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8\") pod \"community-operators-5crr7\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.163165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.163225 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.163291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkxv\" (UniqueName: \"kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.226672 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.241319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.254315 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.271767 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.306356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.306569 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.306659 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.306685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddd9\" (UniqueName: \"kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.306752 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.307076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkxv\" (UniqueName: \"kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.308669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.308988 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.331611 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.338115 4688 patch_prober.go:28] 
interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:31 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:31 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:31 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.338202 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.383730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkxv\" (UniqueName: \"kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv\") pod \"certified-operators-n2pmt\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.400577 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.409522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.409587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.409621 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ddd9\" (UniqueName: \"kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.411242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.411351 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.435636 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 
18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.436264 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.448558 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.457960 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ddd9\" (UniqueName: \"kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9\") pod \"community-operators-6db7k\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.712603 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.717590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" event={"ID":"fd81314a-84fb-4f6d-92f3-de71c92238d9","Type":"ContainerStarted","Data":"de478a5b6d17a3ba73b308ad961212b63888029b5805e437355c3a45a0c0ce39"} Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.722970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kr87l" event={"ID":"44e9c4ca-39a2-42f8-aac2-eca60087c3ed","Type":"ContainerStarted","Data":"7cc7c431c8028f575e22e98dec85590740b204810e622fa13a08ab797902ae9e"} Jan 23 18:08:31 crc kubenswrapper[4688]: I0123 18:08:31.934770 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kr87l" podStartSLOduration=82.934725293 podStartE2EDuration="1m22.934725293s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:31.800202032 +0000 UTC m=+106.796026473" watchObservedRunningTime="2026-01-23 18:08:31.934725293 +0000 UTC m=+106.930549724" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.322615 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:32 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:32 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:32 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.323258 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.407640 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xnkg6" podStartSLOduration=16.407617311 podStartE2EDuration="16.407617311s" podCreationTimestamp="2026-01-23 18:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:31.939386707 +0000 UTC m=+106.935211158" 
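
[annotation] The router-default startup probe keeps failing with the same aggregated healthz body: each sub-check renders as "[+]name ok" or "[-]name failed: reason withheld", and any failing sub-check turns the endpoint into HTTP 500, which the kubelet prober then records. A minimal handler sketch in that response style; the check logic and the port are assumptions for illustration, not the router's actual implementation:

package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body := ""
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// The kubelet prober sees this as "HTTP probe failed with statuscode: 500".
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	// Sub-check names copied from the probe output above; backend-http and
	// has-synced stay red until the router has loaded its config.
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}))
	log.Fatal(http.ListenAndServe(":1936", nil)) // port is an assumption
}

[/annotation]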
watchObservedRunningTime="2026-01-23 18:08:32.407617311 +0000 UTC m=+107.403441752" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.408882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6wxpp"] Jan 23 18:08:32 crc kubenswrapper[4688]: W0123 18:08:32.434538 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41670363_2317_44f9_82cf_e459e23cc97e.slice/crio-b56fb7a463a097b5a30fe8bdbc04b135340b65f66df1701af7209c37a8b1d270 WatchSource:0}: Error finding container b56fb7a463a097b5a30fe8bdbc04b135340b65f66df1701af7209c37a8b1d270: Status 404 returned error can't find the container with id b56fb7a463a097b5a30fe8bdbc04b135340b65f66df1701af7209c37a8b1d270 Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.592859 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.603661 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.611388 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.612297 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.612306 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.689284 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.734735 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" event={"ID":"41670363-2317-44f9-82cf-e459e23cc97e","Type":"ContainerStarted","Data":"b56fb7a463a097b5a30fe8bdbc04b135340b65f66df1701af7209c37a8b1d270"} Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.779987 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.791821 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.791940 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.792031 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.800055 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.860903 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.877571 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.894066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:32 crc kubenswrapper[4688]: I0123 18:08:32.918048 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.039277 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.045933 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.054431 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.058334 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.061820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.100069 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r44xg\" (UniqueName: \"kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.100119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.100288 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.201612 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r44xg\" (UniqueName: \"kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.202130 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.202291 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.207881 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.208284 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content\") pod \"redhat-marketplace-v8gpr\" (UID: 
\"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.230373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r44xg\" (UniqueName: \"kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg\") pod \"redhat-marketplace-v8gpr\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.285628 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:33 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:33 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:33 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.285689 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.378010 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.414040 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.422405 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.423702 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.438359 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.508323 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.508403 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5st\" (UniqueName: \"kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.508428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.608846 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.608923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5st\" (UniqueName: \"kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.608943 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.609401 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.609626 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.643489 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zp5st\" (UniqueName: \"kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st\") pod \"redhat-marketplace-rrnkb\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.676457 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.740366 4688 generic.go:334] "Generic (PLEG): container finished" podID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerID="d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422" exitCode=0 Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.740677 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2pmt" event={"ID":"7c199c10-940f-4ef3-a6a9-14c611e470a1","Type":"ContainerDied","Data":"d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.740753 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2pmt" event={"ID":"7c199c10-940f-4ef3-a6a9-14c611e470a1","Type":"ContainerStarted","Data":"90f92b5316c452343f53a13f83948a2277b0a58e4d7fdf9287c0229e9e5aa434"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.743918 4688 generic.go:334] "Generic (PLEG): container finished" podID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerID="1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf" exitCode=0 Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.743983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gm9fn" event={"ID":"4a6f511f-28fb-4a10-bcb5-1409673fef40","Type":"ContainerDied","Data":"1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.744016 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gm9fn" event={"ID":"4a6f511f-28fb-4a10-bcb5-1409673fef40","Type":"ContainerStarted","Data":"126f21c22dfd1948149513aa998c557d9c5303599112528d87fd0417afae8c1f"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.746778 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerStarted","Data":"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.746836 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerStarted","Data":"964849483a4f7c1b46b46a1f584af82c82f42930c67831bbd15399580a0c3ea8"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.752601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" event={"ID":"41670363-2317-44f9-82cf-e459e23cc97e","Type":"ContainerStarted","Data":"dc17538b81dfefcc659404261fcd2ea5b7e31c598971de6e968a20abb5d38a70"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.754833 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.758552 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f2edc1f-b243-4577-b289-732a65079eda","Type":"ContainerStarted","Data":"739159fdcd96ad8dace0c15c731f0d5c4be5a1c215ebcad22b32acab45c3c49e"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.763378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6db7k" event={"ID":"60bcb3bd-df55-4d54-b987-e4195415f2e3","Type":"ContainerStarted","Data":"e3144033bd81ed0869fec200d9a41629b9d587dd77315feb6a973c92f79826c4"} Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.841095 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.842439 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.845246 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.914445 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxksq\" (UniqueName: \"kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.914630 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.914658 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:33 crc kubenswrapper[4688]: I0123 18:08:33.921842 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.016302 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.016360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.016449 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rxksq\" (UniqueName: \"kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.016940 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.017168 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.030556 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.031759 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.048055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxksq\" (UniqueName: \"kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq\") pod \"redhat-operators-lh47m\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.052421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.117355 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.120992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.121077 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8m8q\" (UniqueName: \"kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.125696 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.140832 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-76f77b778f-9c7cd" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.161933 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.222865 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.222966 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8m8q\" (UniqueName: \"kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.223147 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.225601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.226407 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.309378 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:34 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:34 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:34 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.309440 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.697738 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8m8q\" (UniqueName: \"kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q\") pod \"redhat-operators-4npnz\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.863975 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.870486 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-svczn" Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.909792 4688 generic.go:334] "Generic (PLEG): container finished" podID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerID="d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52" exitCode=0 Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.909906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerDied","Data":"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52"} Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.920204 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.930246 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f2edc1f-b243-4577-b289-732a65079eda","Type":"ContainerStarted","Data":"c525165d3f54e9fb720c9c579e00a4631542d375b0eb59d0a08c5f0aa1e744a7"} Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.987166 4688 generic.go:334] "Generic (PLEG): container finished" podID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerID="0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7" exitCode=0 Jan 23 18:08:34 crc kubenswrapper[4688]: I0123 18:08:34.988274 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6db7k" event={"ID":"60bcb3bd-df55-4d54-b987-e4195415f2e3","Type":"ContainerDied","Data":"0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7"} Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.005221 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8gpr" event={"ID":"cb419c0c-c835-40e8-a2af-166fa2c90791","Type":"ContainerStarted","Data":"32b2c10e7b8481f14ec3450aa0e4449871a885e05787f867e076adf66486bf77"} Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.005291 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.043122 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.043512 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.043444037 podStartE2EDuration="3.043444037s" podCreationTimestamp="2026-01-23 18:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:35.036107492 +0000 UTC m=+110.031931933" watchObservedRunningTime="2026-01-23 18:08:35.043444037 +0000 UTC m=+110.039268498" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.122853 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" podStartSLOduration=86.122831731 podStartE2EDuration="1m26.122831731s" podCreationTimestamp="2026-01-23 18:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:35.094134577 +0000 UTC m=+110.089959018" watchObservedRunningTime="2026-01-23 18:08:35.122831731 +0000 UTC m=+110.118656172" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.289502 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:35 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:35 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:35 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.289895 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.666104 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.667299 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.671269 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.672350 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.673597 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.678147 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.770688 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.770780 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.784067 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:08:35 crc kubenswrapper[4688]: W0123 18:08:35.795975 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7041a10e_482a_4225_b1c4_729d143310a5.slice/crio-abff22043bd8a8d3af51e33f3df6b9e5f4166c40c9b73a923f9086331dcab85f WatchSource:0}: Error finding container abff22043bd8a8d3af51e33f3df6b9e5f4166c40c9b73a923f9086331dcab85f: Status 404 returned error can't find the 
container with id abff22043bd8a8d3af51e33f3df6b9e5f4166c40c9b73a923f9086331dcab85f Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.872096 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.872219 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.872277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:35 crc kubenswrapper[4688]: I0123 18:08:35.893851 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.016606 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerStarted","Data":"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.016703 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerStarted","Data":"7f04661b133444a71cfca530837cf3a0ce5072dcdf89c2f5dd706f26f63021cc"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.021461 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.023428 4688 generic.go:334] "Generic (PLEG): container finished" podID="8f2edc1f-b243-4577-b289-732a65079eda" containerID="c525165d3f54e9fb720c9c579e00a4631542d375b0eb59d0a08c5f0aa1e744a7" exitCode=0 Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.023536 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f2edc1f-b243-4577-b289-732a65079eda","Type":"ContainerDied","Data":"c525165d3f54e9fb720c9c579e00a4631542d375b0eb59d0a08c5f0aa1e744a7"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.025261 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerID="836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b" exitCode=0 Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.026520 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8gpr" event={"ID":"cb419c0c-c835-40e8-a2af-166fa2c90791","Type":"ContainerDied","Data":"836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.033704 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4npnz" event={"ID":"7041a10e-482a-4225-b1c4-729d143310a5","Type":"ContainerStarted","Data":"abff22043bd8a8d3af51e33f3df6b9e5f4166c40c9b73a923f9086331dcab85f"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.035332 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh47m" event={"ID":"32ce53aa-adb0-4e56-93b9-acf618ee0546","Type":"ContainerStarted","Data":"dbf6d81a3c455ca4b5e70dccaeafd6a8135455414cf8464ab39a1ac29899980b"} Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.285511 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:36 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:36 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:36 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.285612 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:36 crc kubenswrapper[4688]: I0123 18:08:36.349492 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 18:08:36 crc kubenswrapper[4688]: W0123 18:08:36.458312 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2391f3bf_c38f_451f_9425_e681727685b9.slice/crio-152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c WatchSource:0}: Error finding container 152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c: Status 404 returned error can't find the container with id 152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.046729 4688 generic.go:334] "Generic (PLEG): container finished" podID="c6a2302e-9cf7-4138-9dde-67aaabe46490" 
containerID="85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62" exitCode=0 Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.047052 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerDied","Data":"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62"} Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.050990 4688 generic.go:334] "Generic (PLEG): container finished" podID="7041a10e-482a-4225-b1c4-729d143310a5" containerID="05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f" exitCode=0 Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.051048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4npnz" event={"ID":"7041a10e-482a-4225-b1c4-729d143310a5","Type":"ContainerDied","Data":"05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f"} Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.055235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2391f3bf-c38f-451f-9425-e681727685b9","Type":"ContainerStarted","Data":"152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c"} Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.059691 4688 generic.go:334] "Generic (PLEG): container finished" podID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerID="050263e751f00fb49f56e053185173af422c5406a93bd60012102e44bb3562d4" exitCode=0 Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.060856 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh47m" event={"ID":"32ce53aa-adb0-4e56-93b9-acf618ee0546","Type":"ContainerDied","Data":"050263e751f00fb49f56e053185173af422c5406a93bd60012102e44bb3562d4"} Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.293246 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:37 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:37 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:37 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.293599 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:37 crc kubenswrapper[4688]: I0123 18:08:37.997419 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.074523 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2391f3bf-c38f-451f-9425-e681727685b9","Type":"ContainerStarted","Data":"b6804b73cf499b40981af3795bdf58877a2e066655f536f6d5c936147fd15c84"} Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.074779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir\") pod \"8f2edc1f-b243-4577-b289-732a65079eda\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.074900 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f2edc1f-b243-4577-b289-732a65079eda" (UID: "8f2edc1f-b243-4577-b289-732a65079eda"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.074978 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access\") pod \"8f2edc1f-b243-4577-b289-732a65079eda\" (UID: \"8f2edc1f-b243-4577-b289-732a65079eda\") " Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.075777 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f2edc1f-b243-4577-b289-732a65079eda-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.079135 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8f2edc1f-b243-4577-b289-732a65079eda","Type":"ContainerDied","Data":"739159fdcd96ad8dace0c15c731f0d5c4be5a1c215ebcad22b32acab45c3c49e"} Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.079195 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="739159fdcd96ad8dace0c15c731f0d5c4be5a1c215ebcad22b32acab45c3c49e" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.079259 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.136611 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.136587297 podStartE2EDuration="3.136587297s" podCreationTimestamp="2026-01-23 18:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:08:38.135433546 +0000 UTC m=+113.131257997" watchObservedRunningTime="2026-01-23 18:08:38.136587297 +0000 UTC m=+113.132411738" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.172167 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f2edc1f-b243-4577-b289-732a65079eda" (UID: "8f2edc1f-b243-4577-b289-732a65079eda"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.183598 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f2edc1f-b243-4577-b289-732a65079eda-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.328968 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 18:08:38 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:38 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:38 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:38 crc kubenswrapper[4688]: I0123 18:08:38.329020 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.075000 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.078723 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.083899 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.083986 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.089166 4688 patch_prober.go:28] interesting pod/console-f9d7485db-f29lx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.089377 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-f29lx" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.283776 4688 patch_prober.go:28] interesting pod/router-default-5444994796-nshhm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Jan 23 18:08:39 crc kubenswrapper[4688]: [-]has-synced failed: reason withheld Jan 23 18:08:39 crc kubenswrapper[4688]: [+]process-running ok Jan 23 18:08:39 crc kubenswrapper[4688]: healthz check failed Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.283840 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nshhm" podUID="44b10f0a-1d4c-4d21-9c48-d08b3e18786e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 18:08:39 crc kubenswrapper[4688]: I0123 18:08:39.857658 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xft6" Jan 23 18:08:40 crc kubenswrapper[4688]: I0123 18:08:40.112562 4688 generic.go:334] "Generic (PLEG): container finished" podID="2391f3bf-c38f-451f-9425-e681727685b9" containerID="b6804b73cf499b40981af3795bdf58877a2e066655f536f6d5c936147fd15c84" exitCode=0 Jan 23 18:08:40 crc kubenswrapper[4688]: I0123 18:08:40.112877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2391f3bf-c38f-451f-9425-e681727685b9","Type":"ContainerDied","Data":"b6804b73cf499b40981af3795bdf58877a2e066655f536f6d5c936147fd15c84"} Jan 23 18:08:40 crc kubenswrapper[4688]: I0123 18:08:40.304026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:40 crc kubenswrapper[4688]: I0123 18:08:40.317071 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nshhm" Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.403919 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.582760 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir\") pod \"2391f3bf-c38f-451f-9425-e681727685b9\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.582880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access\") pod \"2391f3bf-c38f-451f-9425-e681727685b9\" (UID: \"2391f3bf-c38f-451f-9425-e681727685b9\") " Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.582889 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2391f3bf-c38f-451f-9425-e681727685b9" (UID: "2391f3bf-c38f-451f-9425-e681727685b9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.583290 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2391f3bf-c38f-451f-9425-e681727685b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.622391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2391f3bf-c38f-451f-9425-e681727685b9" (UID: "2391f3bf-c38f-451f-9425-e681727685b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:08:42 crc kubenswrapper[4688]: I0123 18:08:42.684200 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2391f3bf-c38f-451f-9425-e681727685b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:08:43 crc kubenswrapper[4688]: I0123 18:08:43.264779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2391f3bf-c38f-451f-9425-e681727685b9","Type":"ContainerDied","Data":"152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c"} Jan 23 18:08:43 crc kubenswrapper[4688]: I0123 18:08:43.264846 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152daa9a749271f83dec1ab9de8ce6b963ef5c5aa435bfcdb17f367044bd3f5c" Jan 23 18:08:43 crc kubenswrapper[4688]: I0123 18:08:43.264935 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.073480 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.074082 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.073578 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.074242 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.074140 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.075031 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness 
probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.075083 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.075547 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"5fb16bb36401b133791455f69ca04c7e6e228c974d6b0c3ac05714a8f8ef78f8"} pod="openshift-console/downloads-7954f5f757-8rxmx" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.075756 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" containerID="cri-o://5fb16bb36401b133791455f69ca04c7e6e228c974d6b0c3ac05714a8f8ef78f8" gracePeriod=2 Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.137526 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:49 crc kubenswrapper[4688]: I0123 18:08:49.146280 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:08:50 crc kubenswrapper[4688]: I0123 18:08:50.589349 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:08:50 crc kubenswrapper[4688]: I0123 18:08:50.858868 4688 generic.go:334] "Generic (PLEG): container finished" podID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerID="5fb16bb36401b133791455f69ca04c7e6e228c974d6b0c3ac05714a8f8ef78f8" exitCode=0 Jan 23 18:08:50 crc kubenswrapper[4688]: I0123 18:08:50.858920 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rxmx" event={"ID":"a65ef93e-9a84-4907-84e4-fcf7248bba7d","Type":"ContainerDied","Data":"5fb16bb36401b133791455f69ca04c7e6e228c974d6b0c3ac05714a8f8ef78f8"} Jan 23 18:08:59 crc kubenswrapper[4688]: I0123 18:08:59.074867 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:08:59 crc kubenswrapper[4688]: I0123 18:08:59.075421 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:08:59 crc kubenswrapper[4688]: I0123 18:08:59.726395 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" Jan 23 18:09:09 crc kubenswrapper[4688]: I0123 18:09:09.072828 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:09 crc kubenswrapper[4688]: I0123 18:09:09.073401 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.104441 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:09:11 crc kubenswrapper[4688]: E0123 18:09:11.106068 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2edc1f-b243-4577-b289-732a65079eda" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.106086 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2edc1f-b243-4577-b289-732a65079eda" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: E0123 18:09:11.106096 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2391f3bf-c38f-451f-9425-e681727685b9" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.106102 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2391f3bf-c38f-451f-9425-e681727685b9" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.106257 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2edc1f-b243-4577-b289-732a65079eda" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.106275 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2391f3bf-c38f-451f-9425-e681727685b9" containerName="pruner" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.106716 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.109700 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.109888 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.112617 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.136850 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.136993 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.237819 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.237939 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.238402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.258496 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:11 crc kubenswrapper[4688]: I0123 18:09:11.440154 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.310008 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.311866 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.314111 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.361328 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.361421 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.361461 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.361503 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.364961 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.365061 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.365634 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.374126 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.374430 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.387994 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.388070 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.388209 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.462902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.463031 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.463121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.564625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.565054 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.565328 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.566015 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.566228 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.603916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access\") pod \"installer-9-crc\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.607775 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.615920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.626981 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 18:09:15 crc kubenswrapper[4688]: I0123 18:09:15.632179 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:09:17 crc kubenswrapper[4688]: E0123 18:09:17.656227 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 18:09:17 crc kubenswrapper[4688]: E0123 18:09:17.656972 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxksq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lh47m_openshift-marketplace(32ce53aa-adb0-4e56-93b9-acf618ee0546): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:17 crc kubenswrapper[4688]: E0123 18:09:17.658296 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lh47m" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" Jan 23 18:09:19 crc kubenswrapper[4688]: I0123 18:09:19.081024 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:19 crc kubenswrapper[4688]: I0123 18:09:19.081414 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.176045 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lh47m" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.262006 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.262332 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r44xg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v8gpr_openshift-marketplace(cb419c0c-c835-40e8-a2af-166fa2c90791): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.263558 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v8gpr" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.270463 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.270765 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zp5st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rrnkb_openshift-marketplace(c6a2302e-9cf7-4138-9dde-67aaabe46490): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:20 crc kubenswrapper[4688]: E0123 18:09:20.272016 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rrnkb" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" Jan 23 18:09:21 crc kubenswrapper[4688]: E0123 18:09:21.648461 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v8gpr" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" Jan 23 18:09:21 crc kubenswrapper[4688]: E0123 18:09:21.649016 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rrnkb" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" Jan 23 18:09:21 crc kubenswrapper[4688]: E0123 18:09:21.717442 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 18:09:21 crc kubenswrapper[4688]: E0123 18:09:21.717618 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sksj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gm9fn_openshift-marketplace(4a6f511f-28fb-4a10-bcb5-1409673fef40): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:21 crc kubenswrapper[4688]: E0123 18:09:21.718874 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gm9fn" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" Jan 23 18:09:22 crc kubenswrapper[4688]: E0123 18:09:22.053860 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 18:09:22 crc kubenswrapper[4688]: E0123 18:09:22.054328 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8m8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4npnz_openshift-marketplace(7041a10e-482a-4225-b1c4-729d143310a5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:22 crc kubenswrapper[4688]: E0123 18:09:22.055616 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4npnz" podUID="7041a10e-482a-4225-b1c4-729d143310a5" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.062254 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4npnz" podUID="7041a10e-482a-4225-b1c4-729d143310a5" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.146421 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.147811 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv2c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5crr7_openshift-marketplace(b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.149291 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5crr7" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.178353 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.178638 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pkxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n2pmt_openshift-marketplace(7c199c10-940f-4ef3-a6a9-14c611e470a1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.179025 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.179102 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ddd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-6db7k_openshift-marketplace(60bcb3bd-df55-4d54-b987-e4195415f2e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.180240 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6db7k" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.180287 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n2pmt" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.282841 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5crr7" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" Jan 23 18:09:25 crc kubenswrapper[4688]: E0123 18:09:25.283253 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6db7k" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.085367 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 18:09:26 crc kubenswrapper[4688]: W0123 18:09:26.097514 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-b77bf03069b6d808d5d24eef31c3b87e74c77368d37e618de451a0e4ae1635da WatchSource:0}: Error finding container b77bf03069b6d808d5d24eef31c3b87e74c77368d37e618de451a0e4ae1635da: Status 404 returned error can't find the container with id b77bf03069b6d808d5d24eef31c3b87e74c77368d37e618de451a0e4ae1635da Jan 23 18:09:26 crc kubenswrapper[4688]: W0123 18:09:26.100323 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod351bfca8_fc19_4257_abb5_536a92a7bd76.slice/crio-0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3 WatchSource:0}: Error finding container 0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3: Status 404 returned error can't find the container with id 0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3 Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.231725 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 18:09:26 crc kubenswrapper[4688]: W0123 18:09:26.245705 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7ff053fa_a174_4323_a28d_6e8173d1c8b7.slice/crio-eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792 WatchSource:0}: Error finding container 
eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792: Status 404 returned error can't find the container with id eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792 Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.289882 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9fe3500af8fea19cab96acd3fe80a5fe7596a95f83ac5003b158062f787c9ed7"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.291261 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7ff053fa-a174-4323-a28d-6e8173d1c8b7","Type":"ContainerStarted","Data":"eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.293700 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"640bb976602d34d035568216db73a6b7c6749265d4ec50d594837e4ad2435235"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.296108 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b77bf03069b6d808d5d24eef31c3b87e74c77368d37e618de451a0e4ae1635da"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.299813 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"351bfca8-fc19-4257-abb5-536a92a7bd76","Type":"ContainerStarted","Data":"0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.301587 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8rxmx" event={"ID":"a65ef93e-9a84-4907-84e4-fcf7248bba7d","Type":"ContainerStarted","Data":"539c408c8abffd80c531deb92948a068b5fe1e2c84fe9e8c74ce0f350c927e83"} Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.303147 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.303250 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:26 crc kubenswrapper[4688]: I0123 18:09:26.303282 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.319458 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b2d80d99d78a4496b7e16aef91302789bcf9dffbbef17b97b3ae9cba8e997aa0"} Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.321935 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"351bfca8-fc19-4257-abb5-536a92a7bd76","Type":"ContainerStarted","Data":"e0491339bf560e270deaa7b82f01dd1f87a0f9d9611b8e00340ed8cced52594a"} Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.324883 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"aac135f2aa1c5a4c9c448d98f136eb4f53acfda56671a02a7c5e19f9fb5163e5"} Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.327024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7ff053fa-a174-4323-a28d-6e8173d1c8b7","Type":"ContainerStarted","Data":"4f130187ee9da4ae735fec7d3bec708ae929cd499421820fbb7281593dee982f"} Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.329670 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.329715 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.330059 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c662bc890a9375cff80a53ed6bdb137d720e1cb5314700388fc64f85ef076303"} Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.330177 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.427950 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.427923352 podStartE2EDuration="16.427923352s" podCreationTimestamp="2026-01-23 18:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:09:27.426050028 +0000 UTC m=+162.421874479" watchObservedRunningTime="2026-01-23 18:09:27.427923352 +0000 UTC m=+162.423747803" Jan 23 18:09:27 crc kubenswrapper[4688]: I0123 18:09:27.450828 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.450804305 podStartE2EDuration="12.450804305s" podCreationTimestamp="2026-01-23 18:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:09:27.447942697 +0000 UTC m=+162.443767138" watchObservedRunningTime="2026-01-23 18:09:27.450804305 +0000 UTC m=+162.446628746" Jan 23 18:09:28 crc kubenswrapper[4688]: I0123 18:09:28.337360 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection 
refused" start-of-body= Jan 23 18:09:28 crc kubenswrapper[4688]: I0123 18:09:28.337449 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.074023 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.074084 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.074129 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.074170 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.347327 4688 generic.go:334] "Generic (PLEG): container finished" podID="351bfca8-fc19-4257-abb5-536a92a7bd76" containerID="e0491339bf560e270deaa7b82f01dd1f87a0f9d9611b8e00340ed8cced52594a" exitCode=0 Jan 23 18:09:29 crc kubenswrapper[4688]: I0123 18:09:29.347429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"351bfca8-fc19-4257-abb5-536a92a7bd76","Type":"ContainerDied","Data":"e0491339bf560e270deaa7b82f01dd1f87a0f9d9611b8e00340ed8cced52594a"} Jan 23 18:09:30 crc kubenswrapper[4688]: I0123 18:09:30.755480 4688 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-s7wj5 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 18:09:30 crc kubenswrapper[4688]: I0123 18:09:30.756112 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-s7wj5" podUID="71ab11f7-6719-4e2a-8993-4c7eed4d51c3" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.119674 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.300673 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir\") pod \"351bfca8-fc19-4257-abb5-536a92a7bd76\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.300843 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access\") pod \"351bfca8-fc19-4257-abb5-536a92a7bd76\" (UID: \"351bfca8-fc19-4257-abb5-536a92a7bd76\") " Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.302579 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "351bfca8-fc19-4257-abb5-536a92a7bd76" (UID: "351bfca8-fc19-4257-abb5-536a92a7bd76"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.320011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "351bfca8-fc19-4257-abb5-536a92a7bd76" (UID: "351bfca8-fc19-4257-abb5-536a92a7bd76"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.367608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"351bfca8-fc19-4257-abb5-536a92a7bd76","Type":"ContainerDied","Data":"0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3"} Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.367678 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a5ca0f014d30afb1a008505e304bf1802392daa952618d6eb07fff0a14764f3" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.367777 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.402255 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/351bfca8-fc19-4257-abb5-536a92a7bd76-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:31 crc kubenswrapper[4688]: I0123 18:09:31.402306 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/351bfca8-fc19-4257-abb5-536a92a7bd76-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.567361 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c8sk2"] Jan 23 18:09:35 crc kubenswrapper[4688]: E0123 18:09:35.568657 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351bfca8-fc19-4257-abb5-536a92a7bd76" containerName="pruner" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.568673 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="351bfca8-fc19-4257-abb5-536a92a7bd76" containerName="pruner" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.568811 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="351bfca8-fc19-4257-abb5-536a92a7bd76" containerName="pruner" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.569378 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.587544 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c8sk2"] Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-certificates\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738123 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-tls\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7339504-c831-4a4e-9e40-710ac852e0c3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738363 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7339504-c831-4a4e-9e40-710ac852e0c3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 
18:09:35.738402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-bound-sa-token\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738437 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-trusted-ca\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738466 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnn9z\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-kube-api-access-mnn9z\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.738560 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.764021 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.839794 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-certificates\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.839890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-tls\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.839924 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7339504-c831-4a4e-9e40-710ac852e0c3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.839963 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-bound-sa-token\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.839988 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7339504-c831-4a4e-9e40-710ac852e0c3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.840021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-trusted-ca\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.840056 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnn9z\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-kube-api-access-mnn9z\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.840606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7339504-c831-4a4e-9e40-710ac852e0c3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.841562 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-trusted-ca\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.841984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-certificates\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.847884 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-registry-tls\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.849826 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7339504-c831-4a4e-9e40-710ac852e0c3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 
18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.857739 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-bound-sa-token\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.859877 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnn9z\" (UniqueName: \"kubernetes.io/projected/e7339504-c831-4a4e-9e40-710ac852e0c3-kube-api-access-mnn9z\") pod \"image-registry-66df7c8f76-c8sk2\" (UID: \"e7339504-c831-4a4e-9e40-710ac852e0c3\") " pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:35 crc kubenswrapper[4688]: I0123 18:09:35.890135 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:36 crc kubenswrapper[4688]: I0123 18:09:36.459818 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c8sk2"] Jan 23 18:09:36 crc kubenswrapper[4688]: W0123 18:09:36.469398 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7339504_c831_4a4e_9e40_710ac852e0c3.slice/crio-a28bf334150291134309ce7b43ddd3954fcc2edb6b819736ec2cc7871c333a37 WatchSource:0}: Error finding container a28bf334150291134309ce7b43ddd3954fcc2edb6b819736ec2cc7871c333a37: Status 404 returned error can't find the container with id a28bf334150291134309ce7b43ddd3954fcc2edb6b819736ec2cc7871c333a37 Jan 23 18:09:36 crc kubenswrapper[4688]: I0123 18:09:36.966259 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:09:36 crc kubenswrapper[4688]: I0123 18:09:36.966888 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:09:37 crc kubenswrapper[4688]: I0123 18:09:37.423225 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" event={"ID":"e7339504-c831-4a4e-9e40-710ac852e0c3","Type":"ContainerStarted","Data":"5675d9eb6ec4f682fb810c8ae3d874b37bdf3a773c0bbfd24254277733ae38cc"} Jan 23 18:09:37 crc kubenswrapper[4688]: I0123 18:09:37.423296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" event={"ID":"e7339504-c831-4a4e-9e40-710ac852e0c3","Type":"ContainerStarted","Data":"a28bf334150291134309ce7b43ddd3954fcc2edb6b819736ec2cc7871c333a37"} Jan 23 18:09:38 crc kubenswrapper[4688]: I0123 18:09:38.432063 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:38 crc kubenswrapper[4688]: I0123 18:09:38.456621 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" 
podStartSLOduration=3.456596757 podStartE2EDuration="3.456596757s" podCreationTimestamp="2026-01-23 18:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:09:38.450686494 +0000 UTC m=+173.446510955" watchObservedRunningTime="2026-01-23 18:09:38.456596757 +0000 UTC m=+173.452421198" Jan 23 18:09:39 crc kubenswrapper[4688]: I0123 18:09:39.074202 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:39 crc kubenswrapper[4688]: I0123 18:09:39.074303 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:39 crc kubenswrapper[4688]: I0123 18:09:39.074406 4688 patch_prober.go:28] interesting pod/downloads-7954f5f757-8rxmx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 18:09:39 crc kubenswrapper[4688]: I0123 18:09:39.074494 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8rxmx" podUID="a65ef93e-9a84-4907-84e4-fcf7248bba7d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 18:09:43 crc kubenswrapper[4688]: I0123 18:09:43.795754 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c7vr8"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.677591 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.698997 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.708962 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.739524 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.779679 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k6fl6"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.780063 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" containerID="cri-o://c857de38ab0ea13a7d8659cebfd78e8792035140d209254629485b98fd678350" gracePeriod=30 Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.789751 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.802818 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.818576 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4gqq5"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.820044 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.822416 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.828287 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.841478 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4gqq5"] Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.851177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.851263 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.851302 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24cj\" (UniqueName: \"kubernetes.io/projected/f9495fe1-3e6a-410d-8628-ebd588169767-kube-api-access-j24cj\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.952793 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.952863 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24cj\" (UniqueName: \"kubernetes.io/projected/f9495fe1-3e6a-410d-8628-ebd588169767-kube-api-access-j24cj\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.952946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: 
\"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.954795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.962730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f9495fe1-3e6a-410d-8628-ebd588169767-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:46 crc kubenswrapper[4688]: I0123 18:09:46.974643 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24cj\" (UniqueName: \"kubernetes.io/projected/f9495fe1-3e6a-410d-8628-ebd588169767-kube-api-access-j24cj\") pod \"marketplace-operator-79b997595-4gqq5\" (UID: \"f9495fe1-3e6a-410d-8628-ebd588169767\") " pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:47 crc kubenswrapper[4688]: I0123 18:09:47.159216 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:48 crc kubenswrapper[4688]: I0123 18:09:48.496001 4688 generic.go:334] "Generic (PLEG): container finished" podID="48574a66-36e9-4915-a747-5ad9e653d135" containerID="c857de38ab0ea13a7d8659cebfd78e8792035140d209254629485b98fd678350" exitCode=0 Jan 23 18:09:48 crc kubenswrapper[4688]: I0123 18:09:48.496107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" event={"ID":"48574a66-36e9-4915-a747-5ad9e653d135","Type":"ContainerDied","Data":"c857de38ab0ea13a7d8659cebfd78e8792035140d209254629485b98fd678350"} Jan 23 18:09:49 crc kubenswrapper[4688]: I0123 18:09:49.082204 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8rxmx" Jan 23 18:09:49 crc kubenswrapper[4688]: I0123 18:09:49.976032 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k6fl6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 23 18:09:49 crc kubenswrapper[4688]: I0123 18:09:49.976142 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.668636 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4gqq5"] Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.682538 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.858353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca\") pod \"48574a66-36e9-4915-a747-5ad9e653d135\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.858451 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzz66\" (UniqueName: \"kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66\") pod \"48574a66-36e9-4915-a747-5ad9e653d135\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.858548 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics\") pod \"48574a66-36e9-4915-a747-5ad9e653d135\" (UID: \"48574a66-36e9-4915-a747-5ad9e653d135\") " Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.859448 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "48574a66-36e9-4915-a747-5ad9e653d135" (UID: "48574a66-36e9-4915-a747-5ad9e653d135"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.868383 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "48574a66-36e9-4915-a747-5ad9e653d135" (UID: "48574a66-36e9-4915-a747-5ad9e653d135"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:09:51 crc kubenswrapper[4688]: I0123 18:09:51.870750 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66" (OuterVolumeSpecName: "kube-api-access-pzz66") pod "48574a66-36e9-4915-a747-5ad9e653d135" (UID: "48574a66-36e9-4915-a747-5ad9e653d135"). InnerVolumeSpecName "kube-api-access-pzz66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:51.961240 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:51.961293 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzz66\" (UniqueName: \"kubernetes.io/projected/48574a66-36e9-4915-a747-5ad9e653d135-kube-api-access-pzz66\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:51.961305 4688 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/48574a66-36e9-4915-a747-5ad9e653d135-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.531455 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8gpr" event={"ID":"cb419c0c-c835-40e8-a2af-166fa2c90791","Type":"ContainerStarted","Data":"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.532261 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8gpr" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-content" containerID="cri-o://584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.562939 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh47m" event={"ID":"32ce53aa-adb0-4e56-93b9-acf618ee0546","Type":"ContainerStarted","Data":"0d037ccd2477bf86059a2cfcd4847772156a04ab9495e89c8435e3adefa6ee80"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.563297 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lh47m" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-content" containerID="cri-o://0d037ccd2477bf86059a2cfcd4847772156a04ab9495e89c8435e3adefa6ee80" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.566164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gm9fn" event={"ID":"4a6f511f-28fb-4a10-bcb5-1409673fef40","Type":"ContainerStarted","Data":"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.566484 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gm9fn" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerName="extract-content" containerID="cri-o://ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.569153 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerStarted","Data":"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.569332 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5crr7" 
podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-content" containerID="cri-o://528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.601654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" event={"ID":"f9495fe1-3e6a-410d-8628-ebd588169767","Type":"ContainerStarted","Data":"3113f4efc6d0c5c303c97b80bc05cf4b42f515af3882fcd36b6129a12cfacfa0"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.601781 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" event={"ID":"f9495fe1-3e6a-410d-8628-ebd588169767","Type":"ContainerStarted","Data":"9b86613909a0150f77ac676c23859dd1cdc19b72a1c220c490bdbb1ac17fe468"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.601805 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.605677 4688 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4gqq5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.611028 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" podUID="f9495fe1-3e6a-410d-8628-ebd588169767" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.612451 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4npnz" event={"ID":"7041a10e-482a-4225-b1c4-729d143310a5","Type":"ContainerStarted","Data":"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.612757 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4npnz" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-content" containerID="cri-o://9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.622849 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2pmt" event={"ID":"7c199c10-940f-4ef3-a6a9-14c611e470a1","Type":"ContainerStarted","Data":"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.623148 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n2pmt" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-content" containerID="cri-o://600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.635678 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" event={"ID":"48574a66-36e9-4915-a747-5ad9e653d135","Type":"ContainerDied","Data":"d2459e6c8257559cff4a563a7b4827b0c622efa9b341c445f0feff89f5dce05e"} Jan 23 18:09:52 crc 
kubenswrapper[4688]: I0123 18:09:52.636280 4688 scope.go:117] "RemoveContainer" containerID="c857de38ab0ea13a7d8659cebfd78e8792035140d209254629485b98fd678350" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.636458 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.662319 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerStarted","Data":"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.662594 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrnkb" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-content" containerID="cri-o://7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.669429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6db7k" event={"ID":"60bcb3bd-df55-4d54-b987-e4195415f2e3","Type":"ContainerStarted","Data":"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e"} Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.670139 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6db7k" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-content" containerID="cri-o://0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e" gracePeriod=30 Jan 23 18:09:52 crc kubenswrapper[4688]: I0123 18:09:52.737290 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" podStartSLOduration=6.7371829 podStartE2EDuration="6.7371829s" podCreationTimestamp="2026-01-23 18:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:09:52.732112796 +0000 UTC m=+187.727937257" watchObservedRunningTime="2026-01-23 18:09:52.7371829 +0000 UTC m=+187.733007361" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.651389 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8gpr_cb419c0c-c835-40e8-a2af-166fa2c90791/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.652386 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.658222 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4npnz_7041a10e-482a-4225-b1c4-729d143310a5/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.658703 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.665557 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rrnkb_c6a2302e-9cf7-4138-9dde-67aaabe46490/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.666247 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.674472 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6db7k_60bcb3bd-df55-4d54-b987-e4195415f2e3/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.676117 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.681329 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n2pmt_7c199c10-940f-4ef3-a6a9-14c611e470a1/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.682203 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.683935 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4npnz_7041a10e-482a-4225-b1c4-729d143310a5/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.684528 4688 generic.go:334] "Generic (PLEG): container finished" podID="7041a10e-482a-4225-b1c4-729d143310a5" containerID="9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.684587 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4npnz" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.684606 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4npnz" event={"ID":"7041a10e-482a-4225-b1c4-729d143310a5","Type":"ContainerDied","Data":"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.684743 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4npnz" event={"ID":"7041a10e-482a-4225-b1c4-729d143310a5","Type":"ContainerDied","Data":"abff22043bd8a8d3af51e33f3df6b9e5f4166c40c9b73a923f9086331dcab85f"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.684802 4688 scope.go:117] "RemoveContainer" containerID="9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.685696 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5crr7_b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.686123 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.688943 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lh47m_32ce53aa-adb0-4e56-93b9-acf618ee0546/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.689580 4688 generic.go:334] "Generic (PLEG): container finished" podID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerID="0d037ccd2477bf86059a2cfcd4847772156a04ab9495e89c8435e3adefa6ee80" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.689666 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh47m" event={"ID":"32ce53aa-adb0-4e56-93b9-acf618ee0546","Type":"ContainerDied","Data":"0d037ccd2477bf86059a2cfcd4847772156a04ab9495e89c8435e3adefa6ee80"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.689759 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lh47m" event={"ID":"32ce53aa-adb0-4e56-93b9-acf618ee0546","Type":"ContainerDied","Data":"dbf6d81a3c455ca4b5e70dccaeafd6a8135455414cf8464ab39a1ac29899980b"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.689806 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbf6d81a3c455ca4b5e70dccaeafd6a8135455414cf8464ab39a1ac29899980b" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.690536 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lh47m_32ce53aa-adb0-4e56-93b9-acf618ee0546/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.691114 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.691779 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5crr7_b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.692581 4688 generic.go:334] "Generic (PLEG): container finished" podID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerID="528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.692662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerDied","Data":"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.692708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crr7" event={"ID":"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f","Type":"ContainerDied","Data":"964849483a4f7c1b46b46a1f584af82c82f42930c67831bbd15399580a0c3ea8"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.692932 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5crr7" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.694945 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6db7k_60bcb3bd-df55-4d54-b987-e4195415f2e3/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.696022 4688 generic.go:334] "Generic (PLEG): container finished" podID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerID="0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.696469 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6db7k" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.696932 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6db7k" event={"ID":"60bcb3bd-df55-4d54-b987-e4195415f2e3","Type":"ContainerDied","Data":"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.697056 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6db7k" event={"ID":"60bcb3bd-df55-4d54-b987-e4195415f2e3","Type":"ContainerDied","Data":"e3144033bd81ed0869fec200d9a41629b9d587dd77315feb6a973c92f79826c4"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.699964 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8gpr_cb419c0c-c835-40e8-a2af-166fa2c90791/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.701262 4688 generic.go:334] "Generic (PLEG): container finished" podID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerID="584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.701329 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8gpr" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.701337 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8gpr" event={"ID":"cb419c0c-c835-40e8-a2af-166fa2c90791","Type":"ContainerDied","Data":"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.701368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8gpr" event={"ID":"cb419c0c-c835-40e8-a2af-166fa2c90791","Type":"ContainerDied","Data":"32b2c10e7b8481f14ec3450aa0e4449871a885e05787f867e076adf66486bf77"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.711472 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-n2pmt_7c199c10-940f-4ef3-a6a9-14c611e470a1/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.712356 4688 generic.go:334] "Generic (PLEG): container finished" podID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerID="600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.712458 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2pmt" event={"ID":"7c199c10-940f-4ef3-a6a9-14c611e470a1","Type":"ContainerDied","Data":"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.712494 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n2pmt" event={"ID":"7c199c10-940f-4ef3-a6a9-14c611e470a1","Type":"ContainerDied","Data":"90f92b5316c452343f53a13f83948a2277b0a58e4d7fdf9287c0229e9e5aa434"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.712553 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gm9fn_4a6f511f-28fb-4a10-bcb5-1409673fef40/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.712561 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n2pmt" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.713121 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.721980 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gm9fn_4a6f511f-28fb-4a10-bcb5-1409673fef40/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.723505 4688 scope.go:117] "RemoveContainer" containerID="05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.723551 4688 generic.go:334] "Generic (PLEG): container finished" podID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerID="ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.723685 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gm9fn" event={"ID":"4a6f511f-28fb-4a10-bcb5-1409673fef40","Type":"ContainerDied","Data":"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.723731 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gm9fn" event={"ID":"4a6f511f-28fb-4a10-bcb5-1409673fef40","Type":"ContainerDied","Data":"126f21c22dfd1948149513aa998c557d9c5303599112528d87fd0417afae8c1f"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r44xg\" (UniqueName: \"kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg\") pod \"cb419c0c-c835-40e8-a2af-166fa2c90791\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727050 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities\") pod \"7041a10e-482a-4225-b1c4-729d143310a5\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content\") pod \"c6a2302e-9cf7-4138-9dde-67aaabe46490\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727126 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxksq\" (UniqueName: \"kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq\") pod \"32ce53aa-adb0-4e56-93b9-acf618ee0546\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727164 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities\") pod \"4a6f511f-28fb-4a10-bcb5-1409673fef40\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727162 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rrnkb_c6a2302e-9cf7-4138-9dde-67aaabe46490/extract-content/0.log" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727213 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content\") pod \"4a6f511f-28fb-4a10-bcb5-1409673fef40\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5st\" (UniqueName: \"kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st\") pod \"c6a2302e-9cf7-4138-9dde-67aaabe46490\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.727705 4688 generic.go:334] "Generic (PLEG): container finished" podID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerID="7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378" exitCode=2 Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.728350 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrnkb" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.728500 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities" (OuterVolumeSpecName: "utilities") pod "7041a10e-482a-4225-b1c4-729d143310a5" (UID: "7041a10e-482a-4225-b1c4-729d143310a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.728555 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerDied","Data":"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.728587 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrnkb" event={"ID":"c6a2302e-9cf7-4138-9dde-67aaabe46490","Type":"ContainerDied","Data":"7f04661b133444a71cfca530837cf3a0ce5072dcdf89c2f5dd706f26f63021cc"} Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.728860 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities" (OuterVolumeSpecName: "utilities") pod "4a6f511f-28fb-4a10-bcb5-1409673fef40" (UID: "4a6f511f-28fb-4a10-bcb5-1409673fef40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.735017 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4gqq5" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.735031 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg" (OuterVolumeSpecName: "kube-api-access-r44xg") pod "cb419c0c-c835-40e8-a2af-166fa2c90791" (UID: "cb419c0c-c835-40e8-a2af-166fa2c90791"). InnerVolumeSpecName "kube-api-access-r44xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.735650 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq" (OuterVolumeSpecName: "kube-api-access-rxksq") pod "32ce53aa-adb0-4e56-93b9-acf618ee0546" (UID: "32ce53aa-adb0-4e56-93b9-acf618ee0546"). 
InnerVolumeSpecName "kube-api-access-rxksq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.736049 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st" (OuterVolumeSpecName: "kube-api-access-zp5st") pod "c6a2302e-9cf7-4138-9dde-67aaabe46490" (UID: "c6a2302e-9cf7-4138-9dde-67aaabe46490"). InnerVolumeSpecName "kube-api-access-zp5st". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.746695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6a2302e-9cf7-4138-9dde-67aaabe46490" (UID: "c6a2302e-9cf7-4138-9dde-67aaabe46490"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.752266 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a6f511f-28fb-4a10-bcb5-1409673fef40" (UID: "4a6f511f-28fb-4a10-bcb5-1409673fef40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.757456 4688 scope.go:117] "RemoveContainer" containerID="9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.758523 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69\": container with ID starting with 9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69 not found: ID does not exist" containerID="9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.758601 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69"} err="failed to get container status \"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69\": rpc error: code = NotFound desc = could not find container \"9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69\": container with ID starting with 9c4865909f769c7d7744ce0888c4e233d22202a2a1c11d9e19f407cb699c9f69 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.758715 4688 scope.go:117] "RemoveContainer" containerID="05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.759548 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f\": container with ID starting with 05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f not found: ID does not exist" containerID="05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.759698 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f"} err="failed to get container status \"05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f\": rpc error: code = NotFound desc = could not find container \"05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f\": container with ID starting with 05415b71941dd7fae6ae97272a2473b440cc756adf7f1d7211a338b4ad0a460f not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.759757 4688 scope.go:117] "RemoveContainer" containerID="528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.785826 4688 scope.go:117] "RemoveContainer" containerID="d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.808984 4688 scope.go:117] "RemoveContainer" containerID="528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.809754 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168\": container with ID starting with 528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168 not found: ID does not exist" containerID="528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.809804 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168"} err="failed to get container status \"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168\": rpc error: code = NotFound desc = could not find container \"528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168\": container with ID starting with 528cfdd64dedba5d752c89e83b4a556467c9a7753af4a17c7e08b231f0e03168 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.809839 4688 scope.go:117] "RemoveContainer" containerID="d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.810574 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52\": container with ID starting with d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52 not found: ID does not exist" containerID="d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.810612 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52"} err="failed to get container status \"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52\": rpc error: code = NotFound desc = could not find container \"d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52\": container with ID starting with d3349b0ba95077a0bd6f26822d4446d65e4a4ff7948830b652d87393d341da52 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.810655 4688 scope.go:117] "RemoveContainer" containerID="0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828057 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities\") pod \"60bcb3bd-df55-4d54-b987-e4195415f2e3\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828116 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content\") pod \"7c199c10-940f-4ef3-a6a9-14c611e470a1\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities\") pod \"7c199c10-940f-4ef3-a6a9-14c611e470a1\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828692 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content\") pod \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828714 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sksj6\" (UniqueName: \"kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6\") pod \"4a6f511f-28fb-4a10-bcb5-1409673fef40\" (UID: \"4a6f511f-28fb-4a10-bcb5-1409673fef40\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828750 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content\") pod \"7041a10e-482a-4225-b1c4-729d143310a5\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.828769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ddd9\" (UniqueName: \"kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9\") pod \"60bcb3bd-df55-4d54-b987-e4195415f2e3\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829312 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8m8q\" (UniqueName: \"kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q\") pod \"7041a10e-482a-4225-b1c4-729d143310a5\" (UID: \"7041a10e-482a-4225-b1c4-729d143310a5\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829355 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content\") pod \"cb419c0c-c835-40e8-a2af-166fa2c90791\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829383 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkxv\" (UniqueName: \"kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv\") pod \"7c199c10-940f-4ef3-a6a9-14c611e470a1\" (UID: \"7c199c10-940f-4ef3-a6a9-14c611e470a1\") " Jan 23 18:09:53 crc 
kubenswrapper[4688]: I0123 18:09:53.829413 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities\") pod \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities\") pod \"cb419c0c-c835-40e8-a2af-166fa2c90791\" (UID: \"cb419c0c-c835-40e8-a2af-166fa2c90791\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities" (OuterVolumeSpecName: "utilities") pod "60bcb3bd-df55-4d54-b987-e4195415f2e3" (UID: "60bcb3bd-df55-4d54-b987-e4195415f2e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829486 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities\") pod \"c6a2302e-9cf7-4138-9dde-67aaabe46490\" (UID: \"c6a2302e-9cf7-4138-9dde-67aaabe46490\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829516 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities\") pod \"32ce53aa-adb0-4e56-93b9-acf618ee0546\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829543 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content\") pod \"32ce53aa-adb0-4e56-93b9-acf618ee0546\" (UID: \"32ce53aa-adb0-4e56-93b9-acf618ee0546\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2c8\" (UniqueName: \"kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8\") pod \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\" (UID: \"b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content\") pod \"60bcb3bd-df55-4d54-b987-e4195415f2e3\" (UID: \"60bcb3bd-df55-4d54-b987-e4195415f2e3\") " Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829773 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities" (OuterVolumeSpecName: "utilities") pod "7c199c10-940f-4ef3-a6a9-14c611e470a1" (UID: "7c199c10-940f-4ef3-a6a9-14c611e470a1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.829993 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830018 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r44xg\" (UniqueName: \"kubernetes.io/projected/cb419c0c-c835-40e8-a2af-166fa2c90791-kube-api-access-r44xg\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830034 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830048 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830062 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxksq\" (UniqueName: \"kubernetes.io/projected/32ce53aa-adb0-4e56-93b9-acf618ee0546-kube-api-access-rxksq\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830078 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830091 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a6f511f-28fb-4a10-bcb5-1409673fef40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.830105 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp5st\" (UniqueName: \"kubernetes.io/projected/c6a2302e-9cf7-4138-9dde-67aaabe46490-kube-api-access-zp5st\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.831489 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities" (OuterVolumeSpecName: "utilities") pod "cb419c0c-c835-40e8-a2af-166fa2c90791" (UID: "cb419c0c-c835-40e8-a2af-166fa2c90791"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.832467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6" (OuterVolumeSpecName: "kube-api-access-sksj6") pod "4a6f511f-28fb-4a10-bcb5-1409673fef40" (UID: "4a6f511f-28fb-4a10-bcb5-1409673fef40"). InnerVolumeSpecName "kube-api-access-sksj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.832710 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities" (OuterVolumeSpecName: "utilities") pod "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" (UID: "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.832911 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q" (OuterVolumeSpecName: "kube-api-access-k8m8q") pod "7041a10e-482a-4225-b1c4-729d143310a5" (UID: "7041a10e-482a-4225-b1c4-729d143310a5"). InnerVolumeSpecName "kube-api-access-k8m8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.833254 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities" (OuterVolumeSpecName: "utilities") pod "c6a2302e-9cf7-4138-9dde-67aaabe46490" (UID: "c6a2302e-9cf7-4138-9dde-67aaabe46490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.833364 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities" (OuterVolumeSpecName: "utilities") pod "32ce53aa-adb0-4e56-93b9-acf618ee0546" (UID: "32ce53aa-adb0-4e56-93b9-acf618ee0546"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.834511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv" (OuterVolumeSpecName: "kube-api-access-2pkxv") pod "7c199c10-940f-4ef3-a6a9-14c611e470a1" (UID: "7c199c10-940f-4ef3-a6a9-14c611e470a1"). InnerVolumeSpecName "kube-api-access-2pkxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.835260 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7041a10e-482a-4225-b1c4-729d143310a5" (UID: "7041a10e-482a-4225-b1c4-729d143310a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.836094 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9" (OuterVolumeSpecName: "kube-api-access-7ddd9") pod "60bcb3bd-df55-4d54-b987-e4195415f2e3" (UID: "60bcb3bd-df55-4d54-b987-e4195415f2e3"). InnerVolumeSpecName "kube-api-access-7ddd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.841583 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c199c10-940f-4ef3-a6a9-14c611e470a1" (UID: "7c199c10-940f-4ef3-a6a9-14c611e470a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.842161 4688 scope.go:117] "RemoveContainer" containerID="0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.845684 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8" (OuterVolumeSpecName: "kube-api-access-kv2c8") pod "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" (UID: "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f"). InnerVolumeSpecName "kube-api-access-kv2c8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.846015 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ce53aa-adb0-4e56-93b9-acf618ee0546" (UID: "32ce53aa-adb0-4e56-93b9-acf618ee0546"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.852749 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb419c0c-c835-40e8-a2af-166fa2c90791" (UID: "cb419c0c-c835-40e8-a2af-166fa2c90791"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.862509 4688 scope.go:117] "RemoveContainer" containerID="0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.864214 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e\": container with ID starting with 0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e not found: ID does not exist" containerID="0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.864268 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e"} err="failed to get container status \"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e\": rpc error: code = NotFound desc = could not find container \"0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e\": container with ID starting with 0a729b117c3e63fe8db9a72d4e895b97a0422a1b33ab25c32038a59d7668ab5e not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.864303 4688 scope.go:117] "RemoveContainer" containerID="0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.864762 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7\": container with ID starting with 0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7 not found: ID does not exist" containerID="0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.864787 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7"} err="failed to get container status \"0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7\": rpc error: code = NotFound desc = could not find container \"0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7\": container with ID starting with 0ac58a69e4c44428c63dc83766529b100cbd28d7045a111f181db43d7256c9f7 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.864803 4688 scope.go:117] "RemoveContainer" containerID="584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.867345 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" (UID: "b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.886689 4688 scope.go:117] "RemoveContainer" containerID="836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.899377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60bcb3bd-df55-4d54-b987-e4195415f2e3" (UID: "60bcb3bd-df55-4d54-b987-e4195415f2e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.913205 4688 scope.go:117] "RemoveContainer" containerID="584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.914056 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4\": container with ID starting with 584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4 not found: ID does not exist" containerID="584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.914145 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4"} err="failed to get container status \"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4\": rpc error: code = NotFound desc = could not find container \"584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4\": container with ID starting with 584a8ad74a8054f822e0592bb2220b8dd2852c7947cbb7cb2d3f34c9771b47d4 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.914215 4688 scope.go:117] "RemoveContainer" containerID="836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.914674 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b\": container with ID starting with 836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b not 
found: ID does not exist" containerID="836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.914743 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b"} err="failed to get container status \"836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b\": rpc error: code = NotFound desc = could not find container \"836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b\": container with ID starting with 836e1aaf5fd1e5c72955ee4f5283f1bbaec17a27a728cf639a33afab7e4c5e6b not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.914784 4688 scope.go:117] "RemoveContainer" containerID="600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.929699 4688 scope.go:117] "RemoveContainer" containerID="d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930609 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930645 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sksj6\" (UniqueName: \"kubernetes.io/projected/4a6f511f-28fb-4a10-bcb5-1409673fef40-kube-api-access-sksj6\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930657 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930668 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7041a10e-482a-4225-b1c4-729d143310a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930678 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ddd9\" (UniqueName: \"kubernetes.io/projected/60bcb3bd-df55-4d54-b987-e4195415f2e3-kube-api-access-7ddd9\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930687 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8m8q\" (UniqueName: \"kubernetes.io/projected/7041a10e-482a-4225-b1c4-729d143310a5-kube-api-access-k8m8q\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930695 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930705 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pkxv\" (UniqueName: \"kubernetes.io/projected/7c199c10-940f-4ef3-a6a9-14c611e470a1-kube-api-access-2pkxv\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930715 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: 
I0123 18:09:53.930724 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb419c0c-c835-40e8-a2af-166fa2c90791-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930734 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a2302e-9cf7-4138-9dde-67aaabe46490-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930743 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930754 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ce53aa-adb0-4e56-93b9-acf618ee0546-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930767 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv2c8\" (UniqueName: \"kubernetes.io/projected/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f-kube-api-access-kv2c8\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930775 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60bcb3bd-df55-4d54-b987-e4195415f2e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.930783 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c199c10-940f-4ef3-a6a9-14c611e470a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.948540 4688 scope.go:117] "RemoveContainer" containerID="600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.949157 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746\": container with ID starting with 600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746 not found: ID does not exist" containerID="600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.949215 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746"} err="failed to get container status \"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746\": rpc error: code = NotFound desc = could not find container \"600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746\": container with ID starting with 600d171d20b0897944bcc63958229643f3cc0539cf2eab5a4c1376340546c746 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.949239 4688 scope.go:117] "RemoveContainer" containerID="d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.949609 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422\": container with ID starting with 
d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422 not found: ID does not exist" containerID="d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.949665 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422"} err="failed to get container status \"d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422\": rpc error: code = NotFound desc = could not find container \"d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422\": container with ID starting with d071d49ed19bc68bc970292f4cdb50accd85ed92aab3e573f41222e3536aa422 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.949680 4688 scope.go:117] "RemoveContainer" containerID="ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.965656 4688 scope.go:117] "RemoveContainer" containerID="1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.983676 4688 scope.go:117] "RemoveContainer" containerID="ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.984296 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2\": container with ID starting with ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2 not found: ID does not exist" containerID="ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.984340 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2"} err="failed to get container status \"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2\": rpc error: code = NotFound desc = could not find container \"ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2\": container with ID starting with ddd5d344ea696cfe9ae1f771e255202c4a2069fc666fcbe6d87d3670c41838a2 not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.984374 4688 scope.go:117] "RemoveContainer" containerID="1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf" Jan 23 18:09:53 crc kubenswrapper[4688]: E0123 18:09:53.984656 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf\": container with ID starting with 1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf not found: ID does not exist" containerID="1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.984689 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf"} err="failed to get container status \"1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf\": rpc error: code = NotFound desc = could not find container \"1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf\": container with ID starting with 
1831f469327d74dae7c348e26de8e86424dc4040e20ba4b8394b69de217c9caf not found: ID does not exist" Jan 23 18:09:53 crc kubenswrapper[4688]: I0123 18:09:53.984718 4688 scope.go:117] "RemoveContainer" containerID="7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.004745 4688 scope.go:117] "RemoveContainer" containerID="85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.038307 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.038953 4688 scope.go:117] "RemoveContainer" containerID="7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378" Jan 23 18:09:54 crc kubenswrapper[4688]: E0123 18:09:54.039520 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378\": container with ID starting with 7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378 not found: ID does not exist" containerID="7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.039573 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378"} err="failed to get container status \"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378\": rpc error: code = NotFound desc = could not find container \"7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378\": container with ID starting with 7b01d8471c859eb2ea2825322388ee944c3c94494b65dfe7b7e1adb7d7671378 not found: ID does not exist" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.039604 4688 scope.go:117] "RemoveContainer" containerID="85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62" Jan 23 18:09:54 crc kubenswrapper[4688]: E0123 18:09:54.040068 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62\": container with ID starting with 85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62 not found: ID does not exist" containerID="85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.040138 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62"} err="failed to get container status \"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62\": rpc error: code = NotFound desc = could not find container \"85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62\": container with ID starting with 85fac82efd6cdd9dfa66c7686b85caaac3372df0a9d0e6a25acbb8793936fa62 not found: ID does not exist" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.040318 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4npnz"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.088985 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.105172 4688 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-5crr7"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.116698 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.120970 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n2pmt"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.141567 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.146729 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrnkb"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.172538 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.175116 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6db7k"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.201467 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.205100 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8gpr"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.735678 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gm9fn" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.742816 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lh47m" Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.792592 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.797276 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gm9fn"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.824803 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:09:54 crc kubenswrapper[4688]: I0123 18:09:54.828473 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lh47m"] Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.375338 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" path="/var/lib/kubelet/pods/32ce53aa-adb0-4e56-93b9-acf618ee0546/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.376030 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" path="/var/lib/kubelet/pods/4a6f511f-28fb-4a10-bcb5-1409673fef40/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.376677 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" path="/var/lib/kubelet/pods/60bcb3bd-df55-4d54-b987-e4195415f2e3/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.377839 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7041a10e-482a-4225-b1c4-729d143310a5" path="/var/lib/kubelet/pods/7041a10e-482a-4225-b1c4-729d143310a5/volumes" Jan 23 18:09:55 crc 
kubenswrapper[4688]: I0123 18:09:55.378450 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" path="/var/lib/kubelet/pods/7c199c10-940f-4ef3-a6a9-14c611e470a1/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.379087 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" path="/var/lib/kubelet/pods/b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.380176 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" path="/var/lib/kubelet/pods/c6a2302e-9cf7-4138-9dde-67aaabe46490/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.380719 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" path="/var/lib/kubelet/pods/cb419c0c-c835-40e8-a2af-166fa2c90791/volumes" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.898999 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c8sk2" Jan 23 18:09:55 crc kubenswrapper[4688]: I0123 18:09:55.958995 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6wxpp"] Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.895678 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cf85r"] Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898685 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898725 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898743 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898757 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898774 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898782 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898793 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898803 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898815 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898823 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-content" Jan 23 18:09:56 crc 
kubenswrapper[4688]: E0123 18:09:56.898838 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898846 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898861 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898876 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898888 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898896 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898906 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898918 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898927 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898935 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898944 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898954 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898972 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.898980 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.898992 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899000 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.899011 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899018 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" 
containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.899030 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899038 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.899048 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899056 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: E0123 18:09:56.899070 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899078 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-utilities" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899248 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7041a10e-482a-4225-b1c4-729d143310a5" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899270 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c199c10-940f-4ef3-a6a9-14c611e470a1" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899282 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb419c0c-c835-40e8-a2af-166fa2c90791" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899290 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="60bcb3bd-df55-4d54-b987-e4195415f2e3" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899300 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a6f511f-28fb-4a10-bcb5-1409673fef40" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899313 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ce53aa-adb0-4e56-93b9-acf618ee0546" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899322 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a2302e-9cf7-4138-9dde-67aaabe46490" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899332 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="48574a66-36e9-4915-a747-5ad9e653d135" containerName="marketplace-operator" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.899342 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84bec44-eb9d-4e2c-b9f7-c6eed5b8eb6f" containerName="extract-content" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.900828 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.903954 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 18:09:56 crc kubenswrapper[4688]: I0123 18:09:56.908202 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cf85r"] Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.084020 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-utilities\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.084739 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-catalog-content\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.084945 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6xg\" (UniqueName: \"kubernetes.io/projected/4e81430c-65b3-4f6e-9986-8a16cbe69d67-kube-api-access-7q6xg\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.098329 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n6tbc"] Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.104355 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.109292 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.112889 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6tbc"] Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.186791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-utilities\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.186867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-catalog-content\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.186953 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6xg\" (UniqueName: \"kubernetes.io/projected/4e81430c-65b3-4f6e-9986-8a16cbe69d67-kube-api-access-7q6xg\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.187607 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-utilities\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.187761 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e81430c-65b3-4f6e-9986-8a16cbe69d67-catalog-content\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.209386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6xg\" (UniqueName: \"kubernetes.io/projected/4e81430c-65b3-4f6e-9986-8a16cbe69d67-kube-api-access-7q6xg\") pod \"community-operators-cf85r\" (UID: \"4e81430c-65b3-4f6e-9986-8a16cbe69d67\") " pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.224851 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.290007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88wrm\" (UniqueName: \"kubernetes.io/projected/4506f985-1626-4ad5-b924-74cd384786a2-kube-api-access-88wrm\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.290107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-utilities\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.290150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-catalog-content\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.392243 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88wrm\" (UniqueName: \"kubernetes.io/projected/4506f985-1626-4ad5-b924-74cd384786a2-kube-api-access-88wrm\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.392779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-utilities\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.392807 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-catalog-content\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.393543 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-catalog-content\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.393720 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4506f985-1626-4ad5-b924-74cd384786a2-utilities\") pod \"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.416853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88wrm\" (UniqueName: \"kubernetes.io/projected/4506f985-1626-4ad5-b924-74cd384786a2-kube-api-access-88wrm\") pod 
\"redhat-marketplace-n6tbc\" (UID: \"4506f985-1626-4ad5-b924-74cd384786a2\") " pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.424093 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.655791 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cf85r"] Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.763347 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf85r" event={"ID":"4e81430c-65b3-4f6e-9986-8a16cbe69d67","Type":"ContainerStarted","Data":"8dd2c4220a490cbddd6f6f567f390059d85b7c9799e42f5f8191420cb22104dd"} Jan 23 18:09:57 crc kubenswrapper[4688]: I0123 18:09:57.898144 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n6tbc"] Jan 23 18:09:57 crc kubenswrapper[4688]: W0123 18:09:57.915641 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4506f985_1626_4ad5_b924_74cd384786a2.slice/crio-43a60ddfdd23b07550a6750cb2f8255e05a872cee9930013a4e928cbcc9461b1 WatchSource:0}: Error finding container 43a60ddfdd23b07550a6750cb2f8255e05a872cee9930013a4e928cbcc9461b1: Status 404 returned error can't find the container with id 43a60ddfdd23b07550a6750cb2f8255e05a872cee9930013a4e928cbcc9461b1 Jan 23 18:09:58 crc kubenswrapper[4688]: I0123 18:09:58.772661 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" containerID="fc1de3cce44a073b4b9a426888d6fd385b5c2dfb9e0640b8834e5ecc1e25406b" exitCode=0 Jan 23 18:09:58 crc kubenswrapper[4688]: I0123 18:09:58.772790 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf85r" event={"ID":"4e81430c-65b3-4f6e-9986-8a16cbe69d67","Type":"ContainerDied","Data":"fc1de3cce44a073b4b9a426888d6fd385b5c2dfb9e0640b8834e5ecc1e25406b"} Jan 23 18:09:58 crc kubenswrapper[4688]: I0123 18:09:58.775266 4688 generic.go:334] "Generic (PLEG): container finished" podID="4506f985-1626-4ad5-b924-74cd384786a2" containerID="709b9d0638162edb4ee2fb14f03500eb64fa992425f2b60968a639740d22dccd" exitCode=0 Jan 23 18:09:58 crc kubenswrapper[4688]: I0123 18:09:58.775314 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6tbc" event={"ID":"4506f985-1626-4ad5-b924-74cd384786a2","Type":"ContainerDied","Data":"709b9d0638162edb4ee2fb14f03500eb64fa992425f2b60968a639740d22dccd"} Jan 23 18:09:58 crc kubenswrapper[4688]: I0123 18:09:58.775338 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6tbc" event={"ID":"4506f985-1626-4ad5-b924-74cd384786a2","Type":"ContainerStarted","Data":"43a60ddfdd23b07550a6750cb2f8255e05a872cee9930013a4e928cbcc9461b1"} Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.296246 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rp974"] Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.298175 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.302277 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.317332 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rp974"] Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.432802 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-catalog-content\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.432872 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bv8r\" (UniqueName: \"kubernetes.io/projected/61b45c79-4271-40da-9245-bf36100d8d38-kube-api-access-2bv8r\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.432932 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-utilities\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.510676 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fb8d"] Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.520055 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fb8d"] Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.520273 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.525155 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.536205 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-catalog-content\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.536269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bv8r\" (UniqueName: \"kubernetes.io/projected/61b45c79-4271-40da-9245-bf36100d8d38-kube-api-access-2bv8r\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.536331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-utilities\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.537030 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-utilities\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.537539 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b45c79-4271-40da-9245-bf36100d8d38-catalog-content\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.570270 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bv8r\" (UniqueName: \"kubernetes.io/projected/61b45c79-4271-40da-9245-bf36100d8d38-kube-api-access-2bv8r\") pod \"redhat-operators-rp974\" (UID: \"61b45c79-4271-40da-9245-bf36100d8d38\") " pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.623848 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.637291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dq6\" (UniqueName: \"kubernetes.io/projected/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-kube-api-access-87dq6\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.637359 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-utilities\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.637425 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-catalog-content\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.741732 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87dq6\" (UniqueName: \"kubernetes.io/projected/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-kube-api-access-87dq6\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.742174 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-utilities\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.742321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-catalog-content\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.742958 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-utilities\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.742998 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-catalog-content\") pod \"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.769452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87dq6\" (UniqueName: \"kubernetes.io/projected/a7f1dd62-ed20-4c0d-8166-14ecfa42faa8-kube-api-access-87dq6\") pod 
\"certified-operators-6fb8d\" (UID: \"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8\") " pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.792317 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6tbc" event={"ID":"4506f985-1626-4ad5-b924-74cd384786a2","Type":"ContainerStarted","Data":"9df1392827d6a87a60ae338a6cbd5e3f19f790c1bb4bcdcf490b75f87ad4b296"} Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.797276 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf85r" event={"ID":"4e81430c-65b3-4f6e-9986-8a16cbe69d67","Type":"ContainerStarted","Data":"2fb20d676c9bc97a7da930f01386988a15d8e379e2e4581781ef7e8d91e14aa4"} Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.882761 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rp974"] Jan 23 18:09:59 crc kubenswrapper[4688]: W0123 18:09:59.890214 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b45c79_4271_40da_9245_bf36100d8d38.slice/crio-d0c7dd90450c76e890473c042fd2003859b7fda32ca391c3c0739c17c7cf704a WatchSource:0}: Error finding container d0c7dd90450c76e890473c042fd2003859b7fda32ca391c3c0739c17c7cf704a: Status 404 returned error can't find the container with id d0c7dd90450c76e890473c042fd2003859b7fda32ca391c3c0739c17c7cf704a Jan 23 18:09:59 crc kubenswrapper[4688]: I0123 18:09:59.908586 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.408628 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fb8d"] Jan 23 18:10:00 crc kubenswrapper[4688]: W0123 18:10:00.413975 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7f1dd62_ed20_4c0d_8166_14ecfa42faa8.slice/crio-7ad23f4b97216c995ac0fa1c9b081a1837f49dd51a87e3f33384812e99c47879 WatchSource:0}: Error finding container 7ad23f4b97216c995ac0fa1c9b081a1837f49dd51a87e3f33384812e99c47879: Status 404 returned error can't find the container with id 7ad23f4b97216c995ac0fa1c9b081a1837f49dd51a87e3f33384812e99c47879 Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.806092 4688 generic.go:334] "Generic (PLEG): container finished" podID="61b45c79-4271-40da-9245-bf36100d8d38" containerID="162acd969e4fd4dbb4a4cc3e84df1f5f7e7336d1862695326e9362d62f175692" exitCode=0 Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.806275 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp974" event={"ID":"61b45c79-4271-40da-9245-bf36100d8d38","Type":"ContainerDied","Data":"162acd969e4fd4dbb4a4cc3e84df1f5f7e7336d1862695326e9362d62f175692"} Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.806318 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp974" event={"ID":"61b45c79-4271-40da-9245-bf36100d8d38","Type":"ContainerStarted","Data":"d0c7dd90450c76e890473c042fd2003859b7fda32ca391c3c0739c17c7cf704a"} Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.808392 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" containerID="8b4ce3c730e8254ae66f991e09382aa7b15f72b5787c023319826c53d16df263" exitCode=0 Jan 23 
18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.808561 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fb8d" event={"ID":"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8","Type":"ContainerDied","Data":"8b4ce3c730e8254ae66f991e09382aa7b15f72b5787c023319826c53d16df263"} Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.808629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fb8d" event={"ID":"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8","Type":"ContainerStarted","Data":"7ad23f4b97216c995ac0fa1c9b081a1837f49dd51a87e3f33384812e99c47879"} Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.811831 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" containerID="2fb20d676c9bc97a7da930f01386988a15d8e379e2e4581781ef7e8d91e14aa4" exitCode=0 Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.811910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf85r" event={"ID":"4e81430c-65b3-4f6e-9986-8a16cbe69d67","Type":"ContainerDied","Data":"2fb20d676c9bc97a7da930f01386988a15d8e379e2e4581781ef7e8d91e14aa4"} Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.817808 4688 generic.go:334] "Generic (PLEG): container finished" podID="4506f985-1626-4ad5-b924-74cd384786a2" containerID="9df1392827d6a87a60ae338a6cbd5e3f19f790c1bb4bcdcf490b75f87ad4b296" exitCode=0 Jan 23 18:10:00 crc kubenswrapper[4688]: I0123 18:10:00.817869 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6tbc" event={"ID":"4506f985-1626-4ad5-b924-74cd384786a2","Type":"ContainerDied","Data":"9df1392827d6a87a60ae338a6cbd5e3f19f790c1bb4bcdcf490b75f87ad4b296"} Jan 23 18:10:01 crc kubenswrapper[4688]: I0123 18:10:01.832305 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cf85r" event={"ID":"4e81430c-65b3-4f6e-9986-8a16cbe69d67","Type":"ContainerStarted","Data":"ee96d6c1119ae253d52394195eb154f67d894618328a93d81c74330270624045"} Jan 23 18:10:01 crc kubenswrapper[4688]: I0123 18:10:01.839427 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n6tbc" event={"ID":"4506f985-1626-4ad5-b924-74cd384786a2","Type":"ContainerStarted","Data":"19ed2e71aaa74a871102fef1c669a4f4fc6920a715086408e9af598694803791"} Jan 23 18:10:01 crc kubenswrapper[4688]: I0123 18:10:01.843169 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fb8d" event={"ID":"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8","Type":"ContainerStarted","Data":"22946f46c17f0c961381524b9573c297e3b3b66bed6d0ebd7967343226c3de44"} Jan 23 18:10:01 crc kubenswrapper[4688]: I0123 18:10:01.859962 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cf85r" podStartSLOduration=3.1538837060000002 podStartE2EDuration="5.859936122s" podCreationTimestamp="2026-01-23 18:09:56 +0000 UTC" firstStartedPulling="2026-01-23 18:09:58.775307715 +0000 UTC m=+193.771132156" lastFinishedPulling="2026-01-23 18:10:01.481360131 +0000 UTC m=+196.477184572" observedRunningTime="2026-01-23 18:10:01.857384245 +0000 UTC m=+196.853208706" watchObservedRunningTime="2026-01-23 18:10:01.859936122 +0000 UTC m=+196.855760563" Jan 23 18:10:01 crc kubenswrapper[4688]: I0123 18:10:01.887694 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-n6tbc" podStartSLOduration=2.060886165 podStartE2EDuration="4.887662741s" podCreationTimestamp="2026-01-23 18:09:57 +0000 UTC" firstStartedPulling="2026-01-23 18:09:58.776959402 +0000 UTC m=+193.772783843" lastFinishedPulling="2026-01-23 18:10:01.603735978 +0000 UTC m=+196.599560419" observedRunningTime="2026-01-23 18:10:01.88413437 +0000 UTC m=+196.879958811" watchObservedRunningTime="2026-01-23 18:10:01.887662741 +0000 UTC m=+196.883487182" Jan 23 18:10:02 crc kubenswrapper[4688]: I0123 18:10:02.855821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp974" event={"ID":"61b45c79-4271-40da-9245-bf36100d8d38","Type":"ContainerStarted","Data":"5e1e295ff983835248ec1dcfbd1dcb538ca579270e3d718f7d763390859144e3"} Jan 23 18:10:03 crc kubenswrapper[4688]: I0123 18:10:03.870441 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" containerID="22946f46c17f0c961381524b9573c297e3b3b66bed6d0ebd7967343226c3de44" exitCode=0 Jan 23 18:10:03 crc kubenswrapper[4688]: I0123 18:10:03.870506 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fb8d" event={"ID":"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8","Type":"ContainerDied","Data":"22946f46c17f0c961381524b9573c297e3b3b66bed6d0ebd7967343226c3de44"} Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.476914 4688 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.477763 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477" gracePeriod=15 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.477975 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987" gracePeriod=15 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.478021 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6" gracePeriod=15 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.478067 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1" gracePeriod=15 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.478101 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194" gracePeriod=15 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479079 4688 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479476 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479490 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479502 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479508 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479519 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479529 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479536 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479543 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479551 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479557 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479565 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479572 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479591 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479596 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479717 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479729 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479742 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479752 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479760 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479767 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.479878 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479884 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.479987 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.481320 4688 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.481878 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.557517 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.608769 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.639334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.639471 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640273 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640367 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640426 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640551 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640600 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.640638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: E0123 18:10:04.667148 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.213:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6fb8d.188d6e92a09fa42c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6fb8d,UID:a7f1dd62-ed20-4c0d-8166-14ecfa42faa8,APIVersion:v1,ResourceVersion:29513,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,LastTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742630 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") 
pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742766 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742786 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742822 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742879 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.742983 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743061 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743279 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743335 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743723 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743756 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.743802 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.903447 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.911665 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fb8d" event={"ID":"a7f1dd62-ed20-4c0d-8166-14ecfa42faa8","Type":"ContainerStarted","Data":"e53f60cd07114c1fa86d73051c14d7dd179f8c98a5db1062617a60cb844ac9da"} Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.912770 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.913069 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.937410 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.941365 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.944314 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987" exitCode=0 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.944376 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6" exitCode=0 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.944390 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1" exitCode=0 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.944401 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194" exitCode=2 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.944499 4688 scope.go:117] "RemoveContainer" containerID="5b5961813166d26b2e42fc3759179cfa36df5acbbe4f2a2bd7014d648cabcfbe" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.960239 4688 generic.go:334] "Generic (PLEG): container finished" podID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" containerID="4f130187ee9da4ae735fec7d3bec708ae929cd499421820fbb7281593dee982f" exitCode=0 Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.960393 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7ff053fa-a174-4323-a28d-6e8173d1c8b7","Type":"ContainerDied","Data":"4f130187ee9da4ae735fec7d3bec708ae929cd499421820fbb7281593dee982f"} Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.962474 4688 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.963323 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:04 crc kubenswrapper[4688]: I0123 18:10:04.963850 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.359493 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.360730 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.361385 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.718066 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.718911 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.719608 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.720614 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.721240 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: E0123 18:10:05.944685 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.213:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6fb8d.188d6e92a09fa42c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6fb8d,UID:a7f1dd62-ed20-4c0d-8166-14ecfa42faa8,APIVersion:v1,ResourceVersion:29513,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,LastTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.971975 4688 generic.go:334] "Generic (PLEG): container finished" podID="61b45c79-4271-40da-9245-bf36100d8d38" containerID="5e1e295ff983835248ec1dcfbd1dcb538ca579270e3d718f7d763390859144e3" exitCode=0 Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.972068 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp974" event={"ID":"61b45c79-4271-40da-9245-bf36100d8d38","Type":"ContainerDied","Data":"5e1e295ff983835248ec1dcfbd1dcb538ca579270e3d718f7d763390859144e3"} Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.973257 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.973544 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.974054 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.974783 
4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.975164 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.977974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.984255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4"} Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.984345 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"886764a7798e5bfb3fe94933c01ccfa66d6f0393f4565b54ef49f1020dc3ecf4"} Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.984609 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.985515 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.986346 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.986835 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:05 crc kubenswrapper[4688]: I0123 18:10:05.987302 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.324446 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.325486 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.325932 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.327647 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.328740 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.329598 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476392 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir\") pod \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476514 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access\") pod \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476572 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock\") pod \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\" (UID: \"7ff053fa-a174-4323-a28d-6e8173d1c8b7\") " Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476610 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock" (OuterVolumeSpecName: "var-lock") pod "7ff053fa-a174-4323-a28d-6e8173d1c8b7" (UID: "7ff053fa-a174-4323-a28d-6e8173d1c8b7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476683 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ff053fa-a174-4323-a28d-6e8173d1c8b7" (UID: "7ff053fa-a174-4323-a28d-6e8173d1c8b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476970 4688 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.476997 4688 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.497828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ff053fa-a174-4323-a28d-6e8173d1c8b7" (UID: "7ff053fa-a174-4323-a28d-6e8173d1c8b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.578603 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ff053fa-a174-4323-a28d-6e8173d1c8b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.965480 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.966170 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.967329 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.968735 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.970441 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.970945 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.971748 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.972122 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.972436 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.972676 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:06 crc kubenswrapper[4688]: I0123 18:10:06.995897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp974" event={"ID":"61b45c79-4271-40da-9245-bf36100d8d38","Type":"ContainerStarted","Data":"5d98fabbf30d509637b2277b5f0b2b0dc4eb87d38761ce45710a19aa81660ab4"} Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.003204 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.003608 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005017 4688 status_manager.go:851] "Failed to get status for pod" 
podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005388 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005641 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005671 4688 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477" exitCode=0 Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005781 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.005851 4688 scope.go:117] "RemoveContainer" containerID="9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.006051 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.006656 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.010033 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.010238 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7ff053fa-a174-4323-a28d-6e8173d1c8b7","Type":"ContainerDied","Data":"eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792"} Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.010303 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6906d434d11c3976d393190ec79abf6f45a7aa69a535685c92427aaa58b792" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.025540 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.025997 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.026651 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.026889 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.027075 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.027279 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.027629 4688 scope.go:117] "RemoveContainer" containerID="75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.044844 4688 scope.go:117] "RemoveContainer" containerID="0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.079260 4688 scope.go:117] "RemoveContainer" containerID="8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194" Jan 23 18:10:07 crc 
kubenswrapper[4688]: I0123 18:10:07.087146 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.087301 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.087338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.087503 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.087589 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.087657 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.089308 4688 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.089348 4688 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.089360 4688 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.097087 4688 scope.go:117] "RemoveContainer" containerID="9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.116298 4688 scope.go:117] "RemoveContainer" containerID="4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.143217 4688 scope.go:117] "RemoveContainer" containerID="9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.143747 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\": container with ID starting with 9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987 not found: ID does not exist" containerID="9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.143788 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987"} err="failed to get container status \"9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\": rpc error: code = NotFound desc = could not find container \"9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987\": container with ID starting with 9847c7afd20bc59cf7c764f39ce208c7061654fb450f05711610f47eee05b987 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.143820 4688 scope.go:117] "RemoveContainer" containerID="75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.144302 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\": container with ID starting with 75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6 not found: ID does not exist" containerID="75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.144330 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6"} err="failed to get container status \"75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\": rpc error: code = NotFound desc = could not find container \"75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6\": container with ID starting 
with 75648357a2f81f0085f95f464aa4e56a1b62acc643f75ac78a00cbdc4fbfd7d6 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.144344 4688 scope.go:117] "RemoveContainer" containerID="0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.144641 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\": container with ID starting with 0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1 not found: ID does not exist" containerID="0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.144664 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1"} err="failed to get container status \"0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\": rpc error: code = NotFound desc = could not find container \"0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1\": container with ID starting with 0e2c6bd1e74655cfca4aa279ee54a0ab5c44217deedc890fc8818fcce3b22ab1 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.144677 4688 scope.go:117] "RemoveContainer" containerID="8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.144946 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\": container with ID starting with 8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194 not found: ID does not exist" containerID="8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.144978 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194"} err="failed to get container status \"8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\": rpc error: code = NotFound desc = could not find container \"8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194\": container with ID starting with 8c17062818e5064fd3135df6a7eab9240fdc9c2bd1555237c4656f8963a15194 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.145010 4688 scope.go:117] "RemoveContainer" containerID="9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.145281 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\": container with ID starting with 9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477 not found: ID does not exist" containerID="9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.145299 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477"} err="failed to get container status \"9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\": 
rpc error: code = NotFound desc = could not find container \"9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477\": container with ID starting with 9b8a6507e68641b8d8d50519a147de971f3206d6de6c5dd50eb3a293bdc17477 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.145313 4688 scope.go:117] "RemoveContainer" containerID="4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772" Jan 23 18:10:07 crc kubenswrapper[4688]: E0123 18:10:07.145627 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\": container with ID starting with 4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772 not found: ID does not exist" containerID="4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.145824 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772"} err="failed to get container status \"4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\": rpc error: code = NotFound desc = could not find container \"4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772\": container with ID starting with 4ff59fcb4ee2e0417ecbeb1a4543402f48f8c16c97124f49380a87ecd9741772 not found: ID does not exist" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.226727 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.226816 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.325640 4688 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.325942 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.326256 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.326445 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.326618 4688 
status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.326786 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.368157 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.370464 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.371153 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.371845 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.372028 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.372255 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.372578 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.372883 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection 
refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.425505 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.425584 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.474865 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.476045 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.476775 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.477151 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.477486 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.477844 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.478215 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:07 crc kubenswrapper[4688]: I0123 18:10:07.478545 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.071997 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-n6tbc" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.072511 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cf85r" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.072513 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.072837 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.073075 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.073605 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.074634 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.075042 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.075369 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.075624 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.075917 4688 status_manager.go:851] "Failed to get status for 
pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.076264 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.076658 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.077008 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.078213 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.079592 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:08 crc kubenswrapper[4688]: I0123 18:10:08.840562 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" containerName="oauth-openshift" containerID="cri-o://5f401395323b3483e48895cd8d5dc22e44620b4c2c9172ceb717d21912959837" gracePeriod=15 Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.625613 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.626139 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.909524 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.909590 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.959119 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.959727 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.960133 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.960579 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.960927 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.961244 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.961537 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:09 crc kubenswrapper[4688]: I0123 18:10:09.961856 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.084390 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fb8d" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.085171 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.085457 4688 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.087016 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.087518 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.087819 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.088174 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.088543 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:10 crc kubenswrapper[4688]: I0123 18:10:10.669572 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rp974" podUID="61b45c79-4271-40da-9245-bf36100d8d38" containerName="registry-server" probeResult="failure" output=< Jan 23 18:10:10 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 18:10:10 crc kubenswrapper[4688]: > Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.050124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" event={"ID":"23f88ea9-d4bc-4702-8561-0babb8fe52df","Type":"ContainerDied","Data":"5f401395323b3483e48895cd8d5dc22e44620b4c2c9172ceb717d21912959837"} Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.050310 4688 generic.go:334] "Generic (PLEG): container finished" podID="23f88ea9-d4bc-4702-8561-0babb8fe52df" containerID="5f401395323b3483e48895cd8d5dc22e44620b4c2c9172ceb717d21912959837" exitCode=0 Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.456147 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.457603 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.457900 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.458235 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.458491 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.459068 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.459371 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.459749 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.460294 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.539862 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.540480 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.541277 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.541581 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.541887 4688 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.541946 4688 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.542222 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="200ms" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564128 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564253 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564305 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564375 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564415 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564451 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564480 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqvjh\" (UniqueName: \"kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564516 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564560 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564589 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564620 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564746 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564814 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: \"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.564874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies\") pod \"23f88ea9-d4bc-4702-8561-0babb8fe52df\" (UID: 
\"23f88ea9-d4bc-4702-8561-0babb8fe52df\") " Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.565140 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567212 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567521 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567540 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567804 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567834 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567846 4688 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567859 4688 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.567871 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.574454 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.575237 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.575255 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh" (OuterVolumeSpecName: "kube-api-access-cqvjh") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "kube-api-access-cqvjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.575821 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.576244 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.576507 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.576707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.577391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.577580 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "23f88ea9-d4bc-4702-8561-0babb8fe52df" (UID: "23f88ea9-d4bc-4702-8561-0babb8fe52df"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669587 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669631 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669649 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669663 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669677 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqvjh\" (UniqueName: \"kubernetes.io/projected/23f88ea9-d4bc-4702-8561-0babb8fe52df-kube-api-access-cqvjh\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669689 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669703 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669716 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: I0123 18:10:11.669729 4688 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/23f88ea9-d4bc-4702-8561-0babb8fe52df-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:11 crc kubenswrapper[4688]: E0123 18:10:11.743401 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="400ms" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.059431 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" event={"ID":"23f88ea9-d4bc-4702-8561-0babb8fe52df","Type":"ContainerDied","Data":"a3dd9eca58137ccc024d1504b296ed0ec0929446646aea26c4d897864cb3cb8b"} Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.059516 4688 scope.go:117] "RemoveContainer" 
containerID="5f401395323b3483e48895cd8d5dc22e44620b4c2c9172ceb717d21912959837" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.059852 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.061286 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.062015 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.062449 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.062927 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.063428 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.063672 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.070312 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.070547 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 
18:10:12.075431 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.075743 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.076622 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.077717 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.078029 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.078358 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.078689 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: I0123 18:10:12.078975 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:12 crc kubenswrapper[4688]: E0123 18:10:12.145132 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="800ms" Jan 23 18:10:12 crc kubenswrapper[4688]: E0123 
18:10:12.946227 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="1.6s" Jan 23 18:10:14 crc kubenswrapper[4688]: E0123 18:10:14.547642 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="3.2s" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.360057 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.360851 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.361286 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.361716 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.362159 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.362521 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.362795 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: I0123 18:10:15.363054 4688 status_manager.go:851] "Failed to get 
status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:15 crc kubenswrapper[4688]: E0123 18:10:15.946306 4688 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.213:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-6fb8d.188d6e92a09fa42c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6fb8d,UID:a7f1dd62-ed20-4c0d-8166-14ecfa42faa8,APIVersion:v1,ResourceVersion:29513,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,LastTimestamp:2026-01-23 18:10:04.665570348 +0000 UTC m=+199.661394819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.357629 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.360908 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.361341 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.361657 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.361962 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.362834 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc 
kubenswrapper[4688]: I0123 18:10:16.363520 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.363765 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.364722 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.386881 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.386933 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:16 crc kubenswrapper[4688]: E0123 18:10:16.387834 4688 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:16 crc kubenswrapper[4688]: I0123 18:10:16.388871 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:16 crc kubenswrapper[4688]: W0123 18:10:16.433298 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2140da878ba01584cf7d2a6456d2fe368a6fa894a1ce1483807fddc4a0631457 WatchSource:0}: Error finding container 2140da878ba01584cf7d2a6456d2fe368a6fa894a1ce1483807fddc4a0631457: Status 404 returned error can't find the container with id 2140da878ba01584cf7d2a6456d2fe368a6fa894a1ce1483807fddc4a0631457 Jan 23 18:10:17 crc kubenswrapper[4688]: I0123 18:10:17.095899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2140da878ba01584cf7d2a6456d2fe368a6fa894a1ce1483807fddc4a0631457"} Jan 23 18:10:17 crc kubenswrapper[4688]: E0123 18:10:17.749247 4688 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.213:6443: connect: connection refused" interval="6.4s" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.686843 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rp974" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.687913 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.689627 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.690294 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.690913 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.691441 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.691944 4688 
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.691944 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.692429 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.692793 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.735412 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rp974"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.736387 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.736836 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.737490 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.738462 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.738962 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.739467 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.739820 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:19 crc kubenswrapper[4688]: I0123 18:10:19.741411 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.119040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.119568 4688 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc" exitCode=1
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.119697 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc"}
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.120489 4688 scope.go:117] "RemoveContainer" containerID="a8e509dd2635709dca998d7f5a601a323819682a768fb6d3cfaee5480de5e7fc"
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.121125 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.123095 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.123911 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused"
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.124504 4688 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="453ad9e3665bc74e137c746494910b7720254708181ea6767122c063a1d7feca" exitCode=0 Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.124593 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.124616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"453ad9e3665bc74e137c746494910b7720254708181ea6767122c063a1d7feca"} Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.124901 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.125177 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.125216 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.125238 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.125447 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.126086 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: E0123 18:10:20.126273 4688 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 
18:10:20.126585 4688 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.126923 4688 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.127249 4688 status_manager.go:851] "Failed to get status for pod" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.127563 4688 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.127891 4688 status_manager.go:851] "Failed to get status for pod" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" pod="openshift-authentication/oauth-openshift-558db77b4-c7vr8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c7vr8\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.128345 4688 status_manager.go:851] "Failed to get status for pod" podUID="61b45c79-4271-40da-9245-bf36100d8d38" pod="openshift-marketplace/redhat-operators-rp974" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rp974\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.128727 4688 status_manager.go:851] "Failed to get status for pod" podUID="4506f985-1626-4ad5-b924-74cd384786a2" pod="openshift-marketplace/redhat-marketplace-n6tbc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-n6tbc\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.128924 4688 status_manager.go:851] "Failed to get status for pod" podUID="a7f1dd62-ed20-4c0d-8166-14ecfa42faa8" pod="openshift-marketplace/certified-operators-6fb8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-6fb8d\": dial tcp 38.129.56.213:6443: connect: connection refused" Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.129161 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 
Jan 23 18:10:20 crc kubenswrapper[4688]: I0123 18:10:20.129161 4688 status_manager.go:851] "Failed to get status for pod" podUID="4e81430c-65b3-4f6e-9986-8a16cbe69d67" pod="openshift-marketplace/community-operators-cf85r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-cf85r\": dial tcp 38.129.56.213:6443: connect: connection refused"
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.006865 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" podUID="41670363-2317-44f9-82cf-e459e23cc97e" containerName="registry" containerID="cri-o://dc17538b81dfefcc659404261fcd2ea5b7e31c598971de6e968a20abb5d38a70" gracePeriod=30
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.134043 4688 generic.go:334] "Generic (PLEG): container finished" podID="41670363-2317-44f9-82cf-e459e23cc97e" containerID="dc17538b81dfefcc659404261fcd2ea5b7e31c598971de6e968a20abb5d38a70" exitCode=0
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.134133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" event={"ID":"41670363-2317-44f9-82cf-e459e23cc97e","Type":"ContainerDied","Data":"dc17538b81dfefcc659404261fcd2ea5b7e31c598971de6e968a20abb5d38a70"}
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.152454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef14397f3792d264474589a6649438d3e21829ee7aec38b13661a3fbc68d6532"}
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.152563 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"228adc35a05d74b4d05881217d2d8354e35a626ce54146b965f0ff64a73ec417"}
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.152581 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5273602b8020a546ae2c9d871ff2886934c09ac2b180f8c50619d880fd0e8734"}
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.168094 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.168611 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"06a37a223dbb099c227c0205b044a0134d74dcb89b378b4b6e61bfe6c9774f56"}
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.486561 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp"
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649086 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649223 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tgn\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649256 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649292 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649323 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-registry-certificates\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649377 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.649688 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"41670363-2317-44f9-82cf-e459e23cc97e\" (UID: \"41670363-2317-44f9-82cf-e459e23cc97e\") "
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.650479 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.656759 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.677484 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.679693 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.680488 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn" (OuterVolumeSpecName: "kube-api-access-q7tgn") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "kube-api-access-q7tgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.688446 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.689411 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "41670363-2317-44f9-82cf-e459e23cc97e" (UID: "41670363-2317-44f9-82cf-e459e23cc97e"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751645 4688 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/41670363-2317-44f9-82cf-e459e23cc97e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751708 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7tgn\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-kube-api-access-q7tgn\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751721 4688 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751731 4688 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/41670363-2317-44f9-82cf-e459e23cc97e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751740 4688 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751750 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41670363-2317-44f9-82cf-e459e23cc97e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:21 crc kubenswrapper[4688]: I0123 18:10:21.751760 4688 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/41670363-2317-44f9-82cf-e459e23cc97e-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.178104 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.178142 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6wxpp" event={"ID":"41670363-2317-44f9-82cf-e459e23cc97e","Type":"ContainerDied","Data":"b56fb7a463a097b5a30fe8bdbc04b135340b65f66df1701af7209c37a8b1d270"} Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.178781 4688 scope.go:117] "RemoveContainer" containerID="dc17538b81dfefcc659404261fcd2ea5b7e31c598971de6e968a20abb5d38a70" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.183433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ab1bfe868434427db5ea522ba71c7ac873a20070775b487c2c34b1aeaca0bf48"} Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.183531 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f9564f39973ad1178f20c856cc7ca290053ff3e730ee2f915d71aaf42c459277"} Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.183653 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.183803 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.183855 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:22 crc kubenswrapper[4688]: I0123 18:10:22.660703 4688 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod48574a66-36e9-4915-a747-5ad9e653d135"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod48574a66-36e9-4915-a747-5ad9e653d135] : Timed out while waiting for systemd to remove kubepods-burstable-pod48574a66_36e9_4915_a747_5ad9e653d135.slice" Jan 23 18:10:22 crc kubenswrapper[4688]: E0123 18:10:22.660818 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod48574a66-36e9-4915-a747-5ad9e653d135] : unable to destroy cgroup paths for cgroup [kubepods burstable pod48574a66-36e9-4915-a747-5ad9e653d135] : Timed out while waiting for systemd to remove kubepods-burstable-pod48574a66_36e9_4915_a747_5ad9e653d135.slice" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" podUID="48574a66-36e9-4915-a747-5ad9e653d135" Jan 23 18:10:23 crc kubenswrapper[4688]: I0123 18:10:23.191040 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k6fl6" Jan 23 18:10:24 crc kubenswrapper[4688]: I0123 18:10:24.798145 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:10:25 crc kubenswrapper[4688]: I0123 18:10:25.582100 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:10:25 crc kubenswrapper[4688]: I0123 18:10:25.587326 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:10:26 crc kubenswrapper[4688]: I0123 18:10:26.389248 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:26 crc kubenswrapper[4688]: I0123 18:10:26.389327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:26 crc kubenswrapper[4688]: I0123 18:10:26.395570 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:27 crc kubenswrapper[4688]: I0123 18:10:27.203474 4688 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:27 crc kubenswrapper[4688]: I0123 18:10:27.330363 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e9f625e6-1625-4b8d-ba9f-48b11b3fb1e8" Jan 23 18:10:28 crc kubenswrapper[4688]: I0123 18:10:28.230099 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:28 crc kubenswrapper[4688]: I0123 18:10:28.230675 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:28 crc kubenswrapper[4688]: I0123 18:10:28.237062 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:28 crc kubenswrapper[4688]: I0123 18:10:28.237125 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e9f625e6-1625-4b8d-ba9f-48b11b3fb1e8" Jan 23 18:10:29 crc kubenswrapper[4688]: I0123 18:10:29.235699 4688 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:29 crc kubenswrapper[4688]: I0123 18:10:29.235743 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="dc60235a-56ea-4b78-aec3-486ba53382dc" Jan 23 18:10:29 crc kubenswrapper[4688]: I0123 18:10:29.240257 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e9f625e6-1625-4b8d-ba9f-48b11b3fb1e8" Jan 23 18:10:34 crc kubenswrapper[4688]: I0123 18:10:34.804030 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 18:10:36 
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.756759 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.965060 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.965157 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.965254 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2"
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.966493 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:10:36 crc kubenswrapper[4688]: I0123 18:10:36.966570 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490" gracePeriod=600
Jan 23 18:10:37 crc kubenswrapper[4688]: I0123 18:10:37.829884 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 23 18:10:37 crc kubenswrapper[4688]: I0123 18:10:37.881306 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 23 18:10:37 crc kubenswrapper[4688]: I0123 18:10:37.898812 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.089079 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.137659 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.584026 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.731796 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.738887 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.781467 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 18:10:38 crc kubenswrapper[4688]: I0123 18:10:38.957380 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.047492 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.101258 4688 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.118361 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.205974 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.246128 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.302971 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.309381 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490" exitCode=0
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.309433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490"}
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.309463 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f"}
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.465524 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.505410 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 18:10:39 crc kubenswrapper[4688]: I0123 18:10:39.692877 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.018044 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.020703 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.119376 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.139695 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.157448 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.175001 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.179786 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.189618 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.407105 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.489269 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.538331 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.576874 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.601295 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.619517 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.679258 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.707125 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.735103 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.823954 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.845848 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.846646 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.895571 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.896643 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 23 18:10:40 crc kubenswrapper[4688]: I0123 18:10:40.996829 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.026979 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.040776 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.072913 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.254179 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.274983 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.316287 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.408135 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.412144 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.523486 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.594772 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.607774 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.682380 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.849268 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 23 18:10:41 crc kubenswrapper[4688]: I0123 18:10:41.913846 4688 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.001159 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.032911 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.034569 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.104332 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.162795 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.174465 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.291960 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.292163 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.373740 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.462541 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.518615 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.535992 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.540587 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.578515 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.620472 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.626172 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.636534 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.660618 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.794127 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.808093 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.895906 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.913401 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 18:10:42 crc kubenswrapper[4688]: I0123 18:10:42.929788 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.003599 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.039113 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.187600 4688 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.245618 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.249874 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.419909 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.427080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.531994 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.625003 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.668814 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.744106 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.746800 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.761832 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.847789 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.873101 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.874118 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 18:10:43 crc kubenswrapper[4688]: I0123 18:10:43.937361 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.061739 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.064690 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
from object-"openshift-ingress"/"router-certs-default" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.089941 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.154857 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.184427 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.201608 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.214509 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.226083 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.236675 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.350161 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.465258 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.528283 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.743650 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.772316 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.936977 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 18:10:44 crc kubenswrapper[4688]: I0123 18:10:44.944175 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.041104 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.082599 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.100787 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.155102 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.236117 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.256028 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.395761 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.443799 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.462368 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.485499 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.649832 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.824917 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 18:10:45 crc kubenswrapper[4688]: I0123 18:10:45.841065 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.021678 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.026879 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.084885 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.140319 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.181606 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.184054 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.234072 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.243812 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.245163 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.267239 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 
18:10:46.365046 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.438219 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.454371 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.497579 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.616089 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.635064 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.635936 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.653746 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.685759 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.705839 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.711738 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.789386 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.805069 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.808878 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.842176 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.865704 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.914568 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.936452 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.948897 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.951715 4688 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 18:10:46 crc kubenswrapper[4688]: I0123 18:10:46.981277 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.014876 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.094103 4688 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.095174 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.095154518 podStartE2EDuration="43.095154518s" podCreationTimestamp="2026-01-23 18:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:10:27.073588194 +0000 UTC m=+222.069412645" watchObservedRunningTime="2026-01-23 18:10:47.095154518 +0000 UTC m=+242.090978959" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.095394 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fb8d" podStartSLOduration=44.526188982 podStartE2EDuration="48.095390286s" podCreationTimestamp="2026-01-23 18:09:59 +0000 UTC" firstStartedPulling="2026-01-23 18:10:00.811502755 +0000 UTC m=+195.807327196" lastFinishedPulling="2026-01-23 18:10:04.380704069 +0000 UTC m=+199.376528500" observedRunningTime="2026-01-23 18:10:27.192880688 +0000 UTC m=+222.188705129" watchObservedRunningTime="2026-01-23 18:10:47.095390286 +0000 UTC m=+242.091214727" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.096957 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rp974" podStartSLOduration=42.422720441 podStartE2EDuration="48.096946376s" podCreationTimestamp="2026-01-23 18:09:59 +0000 UTC" firstStartedPulling="2026-01-23 18:10:00.808109979 +0000 UTC m=+195.803934420" lastFinishedPulling="2026-01-23 18:10:06.482335904 +0000 UTC m=+201.478160355" observedRunningTime="2026-01-23 18:10:27.169443939 +0000 UTC m=+222.165268380" watchObservedRunningTime="2026-01-23 18:10:47.096946376 +0000 UTC m=+242.092770817" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.099651 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6wxpp","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/marketplace-operator-79b997595-k6fl6","openshift-authentication/oauth-openshift-558db77b4-c7vr8"] Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.099727 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.104477 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.126719 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.12669456 podStartE2EDuration="20.12669456s" podCreationTimestamp="2026-01-23 18:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:10:47.121375618 +0000 UTC m=+242.117200069" watchObservedRunningTime="2026-01-23 18:10:47.12669456 +0000 UTC m=+242.122519001" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.152608 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.184095 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.206547 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.232369 4688 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.272730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.363932 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" path="/var/lib/kubelet/pods/23f88ea9-d4bc-4702-8561-0babb8fe52df/volumes" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.365016 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41670363-2317-44f9-82cf-e459e23cc97e" path="/var/lib/kubelet/pods/41670363-2317-44f9-82cf-e459e23cc97e/volumes" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.365755 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48574a66-36e9-4915-a747-5ad9e653d135" path="/var/lib/kubelet/pods/48574a66-36e9-4915-a747-5ad9e653d135/volumes" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.391528 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.451548 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.514126 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.639435 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.724163 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.788942 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.824234 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.861926 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 18:10:47 crc kubenswrapper[4688]: I0123 18:10:47.926654 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" 
Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.001000 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.079287 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.093330 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.127115 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.332398 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.521000 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.573006 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.588254 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.688831 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.828800 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.880875 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 18:10:48 crc kubenswrapper[4688]: I0123 18:10:48.927077 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.016561 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.018956 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.143640 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.167800 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.211854 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.255083 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.415019 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.480958 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.527307 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.644948 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.646346 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.674262 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.728859 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.760892 4688 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.761324 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4" gracePeriod=5 Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.866854 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 18:10:49 crc kubenswrapper[4688]: I0123 18:10:49.939755 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.276666 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.277647 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.330543 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.415208 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.522597 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.627394 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.628153 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.638519 4688 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.644034 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.684100 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.696356 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.718756 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 18:10:50 crc kubenswrapper[4688]: I0123 18:10:50.831889 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.118697 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.205779 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.255898 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.361811 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.415786 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.471114 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.471234 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.494221 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 18:10:51 crc kubenswrapper[4688]: I0123 18:10:51.931976 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.070281 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.413165 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.748895 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.756629 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.812417 4688 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 18:10:52 crc kubenswrapper[4688]: I0123 18:10:52.847472 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.077299 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.183898 4688 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.245881 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.397818 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.501934 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm"] Jan 23 18:10:53 crc kubenswrapper[4688]: E0123 18:10:53.502299 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41670363-2317-44f9-82cf-e459e23cc97e" containerName="registry" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502318 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="41670363-2317-44f9-82cf-e459e23cc97e" containerName="registry" Jan 23 18:10:53 crc kubenswrapper[4688]: E0123 18:10:53.502338 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" containerName="installer" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502348 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" containerName="installer" Jan 23 18:10:53 crc kubenswrapper[4688]: E0123 18:10:53.502371 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502383 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:10:53 crc kubenswrapper[4688]: E0123 18:10:53.502401 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" containerName="oauth-openshift" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502410 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" containerName="oauth-openshift" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502541 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff053fa-a174-4323-a28d-6e8173d1c8b7" containerName="installer" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502559 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="23f88ea9-d4bc-4702-8561-0babb8fe52df" containerName="oauth-openshift" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502574 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.502588 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="41670363-2317-44f9-82cf-e459e23cc97e" containerName="registry" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.503272 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.507589 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.507604 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.508013 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.508333 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.508819 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.509084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.509669 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.509996 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.510053 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.511538 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.511550 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.512965 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.524418 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.526480 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.532765 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm"] Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.533918 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.536976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537050 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-audit-policies\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5gdk\" (UniqueName: \"kubernetes.io/projected/95527da2-1460-49f3-aca4-32679613ebd5-kube-api-access-s5gdk\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537124 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95527da2-1460-49f3-aca4-32679613ebd5-audit-dir\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537210 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537233 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537254 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537275 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537318 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-session\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.537338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.638950 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639031 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-audit-policies\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 
crc kubenswrapper[4688]: I0123 18:10:53.639074 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5gdk\" (UniqueName: \"kubernetes.io/projected/95527da2-1460-49f3-aca4-32679613ebd5-kube-api-access-s5gdk\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95527da2-1460-49f3-aca4-32679613ebd5-audit-dir\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639317 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639400 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc 
kubenswrapper[4688]: I0123 18:10:53.639475 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-session\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639522 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639560 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.639596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.642024 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/95527da2-1460-49f3-aca4-32679613ebd5-audit-dir\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.642683 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.643086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.643201 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-audit-policies\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.643433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.649861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.652168 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.652385 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-login\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.652835 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.653323 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.653881 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.662157 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5gdk\" (UniqueName: \"kubernetes.io/projected/95527da2-1460-49f3-aca4-32679613ebd5-kube-api-access-s5gdk\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.663149 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-system-session\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.663870 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/95527da2-1460-49f3-aca4-32679613ebd5-v4-0-config-user-template-error\") pod \"oauth-openshift-5fff7d8cf9-q4bxm\" (UID: \"95527da2-1460-49f3-aca4-32679613ebd5\") " pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:53 crc kubenswrapper[4688]: I0123 18:10:53.823535 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.054236 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm"] Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.409562 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" event={"ID":"95527da2-1460-49f3-aca4-32679613ebd5","Type":"ContainerStarted","Data":"24cb1d8f4d37af36b3f84d64a7be28a74a75a0bcebf7403bcc11b9e394eb63e2"} Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.409654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" event={"ID":"95527da2-1460-49f3-aca4-32679613ebd5","Type":"ContainerStarted","Data":"225ca38f356525711562293ac72b3f2001508564cbea1e0ca31de6f62dfe1e9f"} Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.410577 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.774060 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" Jan 23 18:10:54 crc kubenswrapper[4688]: I0123 18:10:54.806066 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5fff7d8cf9-q4bxm" podStartSLOduration=71.806035272 podStartE2EDuration="1m11.806035272s" podCreationTimestamp="2026-01-23 18:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:10:54.43894747 +0000 UTC m=+249.434771921" watchObservedRunningTime="2026-01-23 18:10:54.806035272 +0000 UTC m=+249.801859723" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.353928 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.354456 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.364981 4688 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377162 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377211 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377242 4688 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e066409b-eab8-4e18-a4cf-c3a1f01fb46d" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377289 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377357 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377406 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377669 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377713 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377738 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.377764 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.378148 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.378204 4688 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e066409b-eab8-4e18-a4cf-c3a1f01fb46d" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.386760 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.418577 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.418657 4688 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4" exitCode=137 Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.418760 4688 scope.go:117] "RemoveContainer" containerID="2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.418786 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.438971 4688 scope.go:117] "RemoveContainer" containerID="2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4" Jan 23 18:10:55 crc kubenswrapper[4688]: E0123 18:10:55.439789 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4\": container with ID starting with 2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4 not found: ID does not exist" containerID="2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.439848 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4"} err="failed to get container status \"2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4\": rpc error: code = NotFound desc = could not find container \"2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4\": container with ID starting with 2b3c8e29f140540b37f1ae8543f258fbfbf5536edb9ece7730cbb6de5cbad7b4 not found: ID does not exist" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.480044 4688 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.480094 4688 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.480107 4688 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.480120 4688 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:55 crc kubenswrapper[4688]: I0123 18:10:55.480135 4688 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 18:10:57 crc kubenswrapper[4688]: I0123 18:10:57.377681 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 18:11:03 crc kubenswrapper[4688]: I0123 18:11:03.493340 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 18:11:08 crc kubenswrapper[4688]: I0123 18:11:08.000512 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 18:11:19 crc kubenswrapper[4688]: I0123 18:11:19.753970 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 18:11:22 crc kubenswrapper[4688]: I0123 18:11:22.669178 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:11:22 crc kubenswrapper[4688]: I0123 18:11:22.670265 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerName="controller-manager" containerID="cri-o://d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b" gracePeriod=30 Jan 23 18:11:22 crc kubenswrapper[4688]: I0123 18:11:22.765828 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 18:11:22 crc kubenswrapper[4688]: I0123 18:11:22.766049 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" podUID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" containerName="route-controller-manager" containerID="cri-o://4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d" gracePeriod=30 Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.093619 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.199939 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.227805 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles\") pod \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.227899 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kssfq\" (UniqueName: \"kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq\") pod \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.227973 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config\") pod \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.228244 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca\") pod \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.228268 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca\") pod \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.228291 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert\") pod 
\"40379f1a-aa94-41f2-aeb2-de63f0c78d68\" (UID: \"40379f1a-aa94-41f2-aeb2-de63f0c78d68\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.228386 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config\") pod \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.228946 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "40379f1a-aa94-41f2-aeb2-de63f0c78d68" (UID: "40379f1a-aa94-41f2-aeb2-de63f0c78d68"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.229878 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca" (OuterVolumeSpecName: "client-ca") pod "21f38108-a9e5-4b3e-84a6-ad3e5152b1be" (UID: "21f38108-a9e5-4b3e-84a6-ad3e5152b1be"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.229962 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca" (OuterVolumeSpecName: "client-ca") pod "40379f1a-aa94-41f2-aeb2-de63f0c78d68" (UID: "40379f1a-aa94-41f2-aeb2-de63f0c78d68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.229977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config" (OuterVolumeSpecName: "config") pod "21f38108-a9e5-4b3e-84a6-ad3e5152b1be" (UID: "21f38108-a9e5-4b3e-84a6-ad3e5152b1be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.230537 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config" (OuterVolumeSpecName: "config") pod "40379f1a-aa94-41f2-aeb2-de63f0c78d68" (UID: "40379f1a-aa94-41f2-aeb2-de63f0c78d68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.237607 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq" (OuterVolumeSpecName: "kube-api-access-kssfq") pod "40379f1a-aa94-41f2-aeb2-de63f0c78d68" (UID: "40379f1a-aa94-41f2-aeb2-de63f0c78d68"). InnerVolumeSpecName "kube-api-access-kssfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.238463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "40379f1a-aa94-41f2-aeb2-de63f0c78d68" (UID: "40379f1a-aa94-41f2-aeb2-de63f0c78d68"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.329967 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cncq4\" (UniqueName: \"kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4\") pod \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330111 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert\") pod \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\" (UID: \"21f38108-a9e5-4b3e-84a6-ad3e5152b1be\") " Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330510 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330541 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330559 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kssfq\" (UniqueName: \"kubernetes.io/projected/40379f1a-aa94-41f2-aeb2-de63f0c78d68-kube-api-access-kssfq\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330573 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330584 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330598 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/40379f1a-aa94-41f2-aeb2-de63f0c78d68-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.330607 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40379f1a-aa94-41f2-aeb2-de63f0c78d68-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.336009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "21f38108-a9e5-4b3e-84a6-ad3e5152b1be" (UID: "21f38108-a9e5-4b3e-84a6-ad3e5152b1be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.336083 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4" (OuterVolumeSpecName: "kube-api-access-cncq4") pod "21f38108-a9e5-4b3e-84a6-ad3e5152b1be" (UID: "21f38108-a9e5-4b3e-84a6-ad3e5152b1be"). InnerVolumeSpecName "kube-api-access-cncq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.432218 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cncq4\" (UniqueName: \"kubernetes.io/projected/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-kube-api-access-cncq4\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.432272 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21f38108-a9e5-4b3e-84a6-ad3e5152b1be-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.604262 4688 generic.go:334] "Generic (PLEG): container finished" podID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerID="d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b" exitCode=0 Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.604325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" event={"ID":"40379f1a-aa94-41f2-aeb2-de63f0c78d68","Type":"ContainerDied","Data":"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b"} Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.604381 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" event={"ID":"40379f1a-aa94-41f2-aeb2-de63f0c78d68","Type":"ContainerDied","Data":"b60b771de6fe0860615cf061d99d1159410993df07058f905a58e803ac8a19d3"} Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.604390 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hqlrn" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.604409 4688 scope.go:117] "RemoveContainer" containerID="d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.605872 4688 generic.go:334] "Generic (PLEG): container finished" podID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" containerID="4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d" exitCode=0 Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.605895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" event={"ID":"21f38108-a9e5-4b3e-84a6-ad3e5152b1be","Type":"ContainerDied","Data":"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d"} Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.605910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" event={"ID":"21f38108-a9e5-4b3e-84a6-ad3e5152b1be","Type":"ContainerDied","Data":"3d95a80a8d1edbd623b1b23069a70fa095b55eeb742c36066eb4fff67e23111d"} Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.605948 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.625536 4688 scope.go:117] "RemoveContainer" containerID="d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b" Jan 23 18:11:23 crc kubenswrapper[4688]: E0123 18:11:23.626242 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b\": container with ID starting with d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b not found: ID does not exist" containerID="d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.626289 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b"} err="failed to get container status \"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b\": rpc error: code = NotFound desc = could not find container \"d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b\": container with ID starting with d0a6e7472991205e3901c376399b21190ea736e61a75709a27294a75ce37864b not found: ID does not exist" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.626324 4688 scope.go:117] "RemoveContainer" containerID="4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.634206 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.639785 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hqlrn"] Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.641215 4688 scope.go:117] "RemoveContainer" containerID="4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d" Jan 23 18:11:23 crc kubenswrapper[4688]: E0123 18:11:23.642461 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d\": container with ID starting with 4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d not found: ID does not exist" containerID="4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.642531 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d"} err="failed to get container status \"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d\": rpc error: code = NotFound desc = could not find container \"4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d\": container with ID starting with 4f55d035247836454c3862eb10c8b299e64ea96fd1646ea97fa19477134fc78d not found: ID does not exist" Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.644677 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 18:11:23 crc kubenswrapper[4688]: I0123 18:11:23.648286 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-knmkq"] Jan 23 
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.230862 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.380362 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"]
Jan 23 18:11:24 crc kubenswrapper[4688]: E0123 18:11:24.380890 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" containerName="route-controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.380912 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" containerName="route-controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: E0123 18:11:24.380940 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerName="controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.380950 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerName="controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.381100 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" containerName="route-controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.381119 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" containerName="controller-manager"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.381842 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.386021 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"]
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.387127 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.388503 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.388982 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.389286 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.392594 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"]
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.396228 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.405722 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.406389 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.406633 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.406979 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.407143 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.407320 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.407486 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.407581 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.431373 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.436921 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"]
Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.550735 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"
\"kube-api-access-kpffx\" (UniqueName: \"kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.550919 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.550949 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.550972 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/720d7f54-97f8-46c0-8184-3d6466ef8e8a-serving-cert\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.550999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.551040 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.551085 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.551122 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzb7g\" (UniqueName: \"kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.652709 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653278 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpffx\" (UniqueName: \"kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653422 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653572 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653682 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/720d7f54-97f8-46c0-8184-3d6466ef8e8a-serving-cert\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653798 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.653946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.654207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.654362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzb7g\" (UniqueName: \"kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g\") pod 
\"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.654812 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.655047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.655403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.655501 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.656097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.669568 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.670068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/720d7f54-97f8-46c0-8184-3d6466ef8e8a-serving-cert\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.674012 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpffx\" (UniqueName: \"kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx\") pod \"route-controller-manager-7d9f8584dd-t66lk\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") " pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 
18:11:24.680141 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzb7g\" (UniqueName: \"kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g\") pod \"controller-manager-64b46b78c6-kwcjx\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.718998 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:24 crc kubenswrapper[4688]: I0123 18:11:24.732911 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.029343 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"] Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.074999 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"] Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.376970 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f38108-a9e5-4b3e-84a6-ad3e5152b1be" path="/var/lib/kubelet/pods/21f38108-a9e5-4b3e-84a6-ad3e5152b1be/volumes" Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.377865 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40379f1a-aa94-41f2-aeb2-de63f0c78d68" path="/var/lib/kubelet/pods/40379f1a-aa94-41f2-aeb2-de63f0c78d68/volumes" Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.622832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" event={"ID":"720d7f54-97f8-46c0-8184-3d6466ef8e8a","Type":"ContainerStarted","Data":"90b655581eef14ac0d4ac02db325a00c36dcdf3de3a2f575cca11d8ff5955d47"} Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.622892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" event={"ID":"720d7f54-97f8-46c0-8184-3d6466ef8e8a","Type":"ContainerStarted","Data":"05625ab5a4f0e54429e8986abeca3da4f068988c11979b49e91c31cf23b5a4b4"} Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.624320 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.625385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" event={"ID":"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1","Type":"ContainerStarted","Data":"ed59b91c2c1aeaa17118ad0a41a245d2e429832c95001a0a60321dd66aa19fa7"} Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.625412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" event={"ID":"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1","Type":"ContainerStarted","Data":"c33729ad3e33b87bc53e3f298e30c1df0e7897984e08ce7cb048b5fa4f05a9c0"} Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.625972 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:25 crc kubenswrapper[4688]: 
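
Both containers start and the readiness probe is first observed with an empty status; the "ready" transitions follow a moment later. The probes themselves are declared in the pod spec; a sketch of what such a readiness probe looks like when built with the Kubernetes Go types (path, port, scheme, and timings are illustrative, not read from the log):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Illustrative shape of a readiness probe; none of these values
        // are taken from the pods above.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/healthz",
                    Port:   intstr.FromInt(8443),
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            InitialDelaySeconds: 1,
            PeriodSeconds:       10,
        }
        fmt.Printf("%+v\n", probe)
    }
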
Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.636968 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"
Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.637951 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"
Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.872147 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" podStartSLOduration=3.87210103 podStartE2EDuration="3.87210103s" podCreationTimestamp="2026-01-23 18:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:11:25.871549943 +0000 UTC m=+280.867374394" watchObservedRunningTime="2026-01-23 18:11:25.87210103 +0000 UTC m=+280.867925471"
Jan 23 18:11:25 crc kubenswrapper[4688]: I0123 18:11:25.929812 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" podStartSLOduration=3.92978364 podStartE2EDuration="3.92978364s" podCreationTimestamp="2026-01-23 18:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:11:25.90342701 +0000 UTC m=+280.899251451" watchObservedRunningTime="2026-01-23 18:11:25.92978364 +0000 UTC m=+280.925608081"
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.608012 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"]
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.609212 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" podUID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" containerName="controller-manager" containerID="cri-o://ed59b91c2c1aeaa17118ad0a41a245d2e429832c95001a0a60321dd66aa19fa7" gracePeriod=30
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.632581 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"]
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.633020 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" podUID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" containerName="route-controller-manager" containerID="cri-o://90b655581eef14ac0d4ac02db325a00c36dcdf3de3a2f575cca11d8ff5955d47" gracePeriod=30
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.800537 4688 generic.go:334] "Generic (PLEG): container finished" podID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" containerID="90b655581eef14ac0d4ac02db325a00c36dcdf3de3a2f575cca11d8ff5955d47" exitCode=0
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.800624 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" event={"ID":"720d7f54-97f8-46c0-8184-3d6466ef8e8a","Type":"ContainerDied","Data":"90b655581eef14ac0d4ac02db325a00c36dcdf3de3a2f575cca11d8ff5955d47"}
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.809576 4688 generic.go:334] "Generic (PLEG): container finished" podID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" containerID="ed59b91c2c1aeaa17118ad0a41a245d2e429832c95001a0a60321dd66aa19fa7" exitCode=0
Jan 23 18:11:30 crc kubenswrapper[4688]: I0123 18:11:30.809649 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" event={"ID":"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1","Type":"ContainerDied","Data":"ed59b91c2c1aeaa17118ad0a41a245d2e429832c95001a0a60321dd66aa19fa7"}
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.136611 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.227930 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.239142 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/720d7f54-97f8-46c0-8184-3d6466ef8e8a-serving-cert\") pod \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") "
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.239418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca\") pod \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") "
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.239570 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpffx\" (UniqueName: \"kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx\") pod \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") "
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.239639 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config\") pod \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\" (UID: \"720d7f54-97f8-46c0-8184-3d6466ef8e8a\") "
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.242314 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca" (OuterVolumeSpecName: "client-ca") pod "720d7f54-97f8-46c0-8184-3d6466ef8e8a" (UID: "720d7f54-97f8-46c0-8184-3d6466ef8e8a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.242590 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config" (OuterVolumeSpecName: "config") pod "720d7f54-97f8-46c0-8184-3d6466ef8e8a" (UID: "720d7f54-97f8-46c0-8184-3d6466ef8e8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.247346 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx" (OuterVolumeSpecName: "kube-api-access-kpffx") pod "720d7f54-97f8-46c0-8184-3d6466ef8e8a" (UID: "720d7f54-97f8-46c0-8184-3d6466ef8e8a"). InnerVolumeSpecName "kube-api-access-kpffx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.341476 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert\") pod \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.341596 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config\") pod \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.341678 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles\") pod \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.341732 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzb7g\" (UniqueName: \"kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g\") pod \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.341860 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca\") pod \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\" (UID: \"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1\") " Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.342247 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.342273 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpffx\" (UniqueName: \"kubernetes.io/projected/720d7f54-97f8-46c0-8184-3d6466ef8e8a-kube-api-access-kpffx\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.342285 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/720d7f54-97f8-46c0-8184-3d6466ef8e8a-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.342297 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/720d7f54-97f8-46c0-8184-3d6466ef8e8a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.342997 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca" 
(OuterVolumeSpecName: "client-ca") pod "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" (UID: "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.343040 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config" (OuterVolumeSpecName: "config") pod "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" (UID: "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.344720 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" (UID: "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.346342 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" (UID: "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.347103 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g" (OuterVolumeSpecName: "kube-api-access-fzb7g") pod "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" (UID: "5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1"). InnerVolumeSpecName "kube-api-access-fzb7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.443673 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.444085 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.444163 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.444286 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzb7g\" (UniqueName: \"kubernetes.io/projected/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-kube-api-access-fzb7g\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.444377 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.817782 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" event={"ID":"720d7f54-97f8-46c0-8184-3d6466ef8e8a","Type":"ContainerDied","Data":"05625ab5a4f0e54429e8986abeca3da4f068988c11979b49e91c31cf23b5a4b4"} Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.819366 4688 scope.go:117] "RemoveContainer" containerID="90b655581eef14ac0d4ac02db325a00c36dcdf3de3a2f575cca11d8ff5955d47" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.819311 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.820883 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" event={"ID":"5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1","Type":"ContainerDied","Data":"c33729ad3e33b87bc53e3f298e30c1df0e7897984e08ce7cb048b5fa4f05a9c0"} Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.821030 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64b46b78c6-kwcjx" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.838753 4688 scope.go:117] "RemoveContainer" containerID="ed59b91c2c1aeaa17118ad0a41a245d2e429832c95001a0a60321dd66aa19fa7" Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.853754 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"] Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.861907 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64b46b78c6-kwcjx"] Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.867788 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"] Jan 23 18:11:31 crc kubenswrapper[4688]: I0123 18:11:31.874035 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d9f8584dd-t66lk"] Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.387864 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:11:32 crc kubenswrapper[4688]: E0123 18:11:32.388363 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" containerName="controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.388384 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" containerName="controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: E0123 18:11:32.388412 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" containerName="route-controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.388421 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" containerName="route-controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.388576 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" containerName="route-controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.388594 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" containerName="controller-manager" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.389259 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.393567 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.394269 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.394353 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.394484 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.394571 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.395784 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.397584 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.400223 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.400630 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.401371 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.401535 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.401834 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.401905 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.402498 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.403677 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.406162 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.413067 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563488 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563570 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq2h\" (UniqueName: \"kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563615 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563877 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.563975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h65x9\" (UniqueName: \"kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.564013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665160 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665272 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h65x9\" (UniqueName: \"kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665350 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665391 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665418 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jq2h\" (UniqueName: \"kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert\") pod 
\"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.665489 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.666769 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.667053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.667839 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.668166 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.668309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.671130 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.674925 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.685591 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jq2h\" (UniqueName: \"kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h\") pod \"route-controller-manager-6cf858fd97-xql76\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.686108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h65x9\" (UniqueName: \"kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9\") pod \"controller-manager-667c46fcf7-r4wx6\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.714999 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.724507 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:32 crc kubenswrapper[4688]: I0123 18:11:32.959793 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.000422 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:11:33 crc kubenswrapper[4688]: W0123 18:11:33.010107 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a45e36d_9529_4c9f_b92e_894c57ef9f5f.slice/crio-6e6c332e2298f1483086b67ead19c3d23d42e1ad76a7a7bf8b6ed7311b432331 WatchSource:0}: Error finding container 6e6c332e2298f1483086b67ead19c3d23d42e1ad76a7a7bf8b6ed7311b432331: Status 404 returned error can't find the container with id 6e6c332e2298f1483086b67ead19c3d23d42e1ad76a7a7bf8b6ed7311b432331 Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.367129 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1" path="/var/lib/kubelet/pods/5592ddbd-a333-4ea8-bccf-8b7c56fa2ea1/volumes" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.368070 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="720d7f54-97f8-46c0-8184-3d6466ef8e8a" path="/var/lib/kubelet/pods/720d7f54-97f8-46c0-8184-3d6466ef8e8a/volumes" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.868031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" event={"ID":"2a45e36d-9529-4c9f-b92e-894c57ef9f5f","Type":"ContainerStarted","Data":"f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7"} Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.868512 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" event={"ID":"2a45e36d-9529-4c9f-b92e-894c57ef9f5f","Type":"ContainerStarted","Data":"6e6c332e2298f1483086b67ead19c3d23d42e1ad76a7a7bf8b6ed7311b432331"}
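
Each "SyncLoop (PLEG)" entry above ends in an event={...} payload that is plain JSON: the pod UID, an event type such as ContainerStarted or ContainerDied, and the CRI-O container or sandbox ID (here 6e6c332e... is the sandbox whose cgroup cAdvisor briefly failed to find a moment earlier, which typically just means the watch event raced the sandbox's creation). A small sketch for pulling that payload out of a captured line; the field names come straight from the log, and the abridged sample line is the only assumption:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// plegEvent mirrors the shape of the event=… payload in the
// "SyncLoop (PLEG)" lines; field names are taken from the log itself.
type plegEvent struct {
	ID   string // pod UID
	Type string // e.g. ContainerStarted, ContainerDied
	Data string // container or sandbox ID
}

func main() {
	// Abridged copy of one of the lines above.
	line := `"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" event={"ID":"2a45e36d-9529-4c9f-b92e-894c57ef9f5f","Type":"ContainerStarted","Data":"f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7"}`
	if i := strings.Index(line, "event="); i >= 0 {
		var ev plegEvent
		if err := json.Unmarshal([]byte(line[i+len("event="):]), &ev); err != nil {
			panic(err)
		}
		fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
	}
}
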
pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.869552 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" event={"ID":"9429907b-3c4b-4759-87ff-91149d62812d","Type":"ContainerStarted","Data":"e357da98badeeec9c8eadcb6bb4cc7d8677ade8a54583ca878fde507f05dc7b5"} Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.869607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" event={"ID":"9429907b-3c4b-4759-87ff-91149d62812d","Type":"ContainerStarted","Data":"3d77fa86168a3aeb0c8ae231222ce67783cdd49fc911bbad40f4fc0326e0e5f5"} Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.869856 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.875024 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.876321 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.895281 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" podStartSLOduration=3.895246721 podStartE2EDuration="3.895246721s" podCreationTimestamp="2026-01-23 18:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:11:33.887777584 +0000 UTC m=+288.883602035" watchObservedRunningTime="2026-01-23 18:11:33.895246721 +0000 UTC m=+288.891071162" Jan 23 18:11:33 crc kubenswrapper[4688]: I0123 18:11:33.951813 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" podStartSLOduration=3.951782917 podStartE2EDuration="3.951782917s" podCreationTimestamp="2026-01-23 18:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:11:33.950215849 +0000 UTC m=+288.946040310" watchObservedRunningTime="2026-01-23 18:11:33.951782917 +0000 UTC m=+288.947607358" Jan 23 18:11:45 crc kubenswrapper[4688]: I0123 18:11:45.147126 4688 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.674686 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.675937 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager" containerID="cri-o://f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7" gracePeriod=30 Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.726545 4688 patch_prober.go:28] interesting pod/route-controller-manager-6cf858fd97-xql76 
Jan 23 18:11:45 crc kubenswrapper[4688]: I0123 18:11:45.147126 4688 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.674686 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.675937 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager" containerID="cri-o://f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7" gracePeriod=30 Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.726545 4688 patch_prober.go:28] interesting pod/route-controller-manager-6cf858fd97-xql76 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Jan 23 18:12:02 crc kubenswrapper[4688]: I0123 18:12:02.726829 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.058431 4688 generic.go:334] "Generic (PLEG): container finished" podID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerID="f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7" exitCode=0 Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.058538 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" event={"ID":"2a45e36d-9529-4c9f-b92e-894c57ef9f5f","Type":"ContainerDied","Data":"f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7"} Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.678166 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.732714 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p"] Jan 23 18:12:03 crc kubenswrapper[4688]: E0123 18:12:03.735261 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.735301 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.735491 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" containerName="route-controller-manager"
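
The readiness failure above is the expected shape of a graceful shutdown: "SyncLoop DELETE" triggers "Killing container with a grace period" (gracePeriod=30), the process exits cleanly (exitCode=0), and in the window before teardown completes the prober's GET against https://10.217.0.67:8443/healthz is refused because nothing is listening any more. Reduced to essentials, such an HTTP probe looks roughly like the sketch below; this is a stand-in rather than kubelet's prober, though kubelet likewise skips certificate verification on HTTPS probes and counts any 2xx/3xx status as success:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe issues one HTTPS GET the way the Readiness lines above describe:
// a dial error such as "connection refused" counts as a probe failure.
func probe(url string) string {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Like kubelet's HTTPS probes, do not verify the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Sprintf("failure: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return "success"
	}
	return fmt.Sprintf("failure: status %d", resp.StatusCode)
}

func main() {
	// The URL is the one from the log; with the pod gone this prints the
	// same "connect: connection refused" the prober reported.
	fmt.Println(probe("https://10.217.0.67:8443/healthz"))
}
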
Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.736324 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.747325 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p"] Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.813180 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert\") pod \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.813715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jq2h\" (UniqueName: \"kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h\") pod \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.813888 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config\") pod \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.814009 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca\") pod \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\" (UID: \"2a45e36d-9529-4c9f-b92e-894c57ef9f5f\") " Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.814387 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjq8\" (UniqueName: \"kubernetes.io/projected/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-kube-api-access-ljjq8\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.814487 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-config\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.814607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-client-ca\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.814706 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-serving-cert\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]:
I0123 18:12:03.814814 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config" (OuterVolumeSpecName: "config") pod "2a45e36d-9529-4c9f-b92e-894c57ef9f5f" (UID: "2a45e36d-9529-4c9f-b92e-894c57ef9f5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.815012 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a45e36d-9529-4c9f-b92e-894c57ef9f5f" (UID: "2a45e36d-9529-4c9f-b92e-894c57ef9f5f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.821062 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h" (OuterVolumeSpecName: "kube-api-access-7jq2h") pod "2a45e36d-9529-4c9f-b92e-894c57ef9f5f" (UID: "2a45e36d-9529-4c9f-b92e-894c57ef9f5f"). InnerVolumeSpecName "kube-api-access-7jq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.833524 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a45e36d-9529-4c9f-b92e-894c57ef9f5f" (UID: "2a45e36d-9529-4c9f-b92e-894c57ef9f5f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916292 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjq8\" (UniqueName: \"kubernetes.io/projected/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-kube-api-access-ljjq8\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916369 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-config\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916415 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-client-ca\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916448 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-serving-cert\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916554 4688 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916566 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jq2h\" (UniqueName: \"kubernetes.io/projected/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-kube-api-access-7jq2h\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916578 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.916587 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a45e36d-9529-4c9f-b92e-894c57ef9f5f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.917687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-client-ca\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.918481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-config\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.920880 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-serving-cert\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:03 crc kubenswrapper[4688]: I0123 18:12:03.935115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjq8\" (UniqueName: \"kubernetes.io/projected/3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39-kube-api-access-ljjq8\") pod \"route-controller-manager-6d8b5bddb-gpg2p\" (UID: \"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39\") " pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.066370 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" event={"ID":"2a45e36d-9529-4c9f-b92e-894c57ef9f5f","Type":"ContainerDied","Data":"6e6c332e2298f1483086b67ead19c3d23d42e1ad76a7a7bf8b6ed7311b432331"} Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.066429 4688 scope.go:117] "RemoveContainer" containerID="f8a61777a0117c7f1d0e7edf90db25d264d5a838a690189fd5d9ef4a6136b3c7" Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.066554 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76" Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.070421 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.101027 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.107709 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf858fd97-xql76"] Jan 23 18:12:04 crc kubenswrapper[4688]: I0123 18:12:04.821738 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p"] Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.074630 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" event={"ID":"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39","Type":"ContainerStarted","Data":"bea55a3dd0a597cbae0c05e6103a5bad5328231fcc4dbdde9c1782710f390160"} Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.075302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" event={"ID":"3a5d12e7-6fdf-4a5d-93b4-4c1ada582e39","Type":"ContainerStarted","Data":"6316c2d61bc352b3b3838a5c3b79e123409901d174e2e6cdf98b328e61c869dd"} Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.075331 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.098835 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" podStartSLOduration=3.098800595 podStartE2EDuration="3.098800595s" podCreationTimestamp="2026-01-23 18:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:12:05.093860476 +0000 UTC m=+320.089684917" watchObservedRunningTime="2026-01-23 18:12:05.098800595 +0000 UTC m=+320.094625046" Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.389773 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a45e36d-9529-4c9f-b92e-894c57ef9f5f" path="/var/lib/kubelet/pods/2a45e36d-9529-4c9f-b92e-894c57ef9f5f/volumes" Jan 23 18:12:05 crc kubenswrapper[4688]: I0123 18:12:05.497915 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d8b5bddb-gpg2p" Jan 23 18:12:22 crc kubenswrapper[4688]: I0123 18:12:22.649645 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:12:22 crc kubenswrapper[4688]: I0123 18:12:22.653042 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" podUID="9429907b-3c4b-4759-87ff-91149d62812d" containerName="controller-manager" containerID="cri-o://e357da98badeeec9c8eadcb6bb4cc7d8677ade8a54583ca878fde507f05dc7b5" gracePeriod=30 Jan 23 18:12:22 crc kubenswrapper[4688]: I0123 18:12:22.716526 4688 patch_prober.go:28] interesting pod/controller-manager-667c46fcf7-r4wx6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 23 18:12:22 crc kubenswrapper[4688]: I0123 18:12:22.716664 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" podUID="9429907b-3c4b-4759-87ff-91149d62812d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.181084 4688 generic.go:334] "Generic (PLEG): container finished" podID="9429907b-3c4b-4759-87ff-91149d62812d" containerID="e357da98badeeec9c8eadcb6bb4cc7d8677ade8a54583ca878fde507f05dc7b5" exitCode=0 Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.181158 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" event={"ID":"9429907b-3c4b-4759-87ff-91149d62812d","Type":"ContainerDied","Data":"e357da98badeeec9c8eadcb6bb4cc7d8677ade8a54583ca878fde507f05dc7b5"} Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.593537 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.667569 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles\") pod \"9429907b-3c4b-4759-87ff-91149d62812d\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.667693 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert\") pod \"9429907b-3c4b-4759-87ff-91149d62812d\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.667813 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca\") pod \"9429907b-3c4b-4759-87ff-91149d62812d\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.667861 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config\") pod \"9429907b-3c4b-4759-87ff-91149d62812d\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.667908 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h65x9\" (UniqueName: \"kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9\") pod \"9429907b-3c4b-4759-87ff-91149d62812d\" (UID: \"9429907b-3c4b-4759-87ff-91149d62812d\") " Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.670025 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca" (OuterVolumeSpecName: "client-ca") pod "9429907b-3c4b-4759-87ff-91149d62812d" (UID: "9429907b-3c4b-4759-87ff-91149d62812d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.670296 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9429907b-3c4b-4759-87ff-91149d62812d" (UID: "9429907b-3c4b-4759-87ff-91149d62812d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.670378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config" (OuterVolumeSpecName: "config") pod "9429907b-3c4b-4759-87ff-91149d62812d" (UID: "9429907b-3c4b-4759-87ff-91149d62812d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.677274 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9429907b-3c4b-4759-87ff-91149d62812d" (UID: "9429907b-3c4b-4759-87ff-91149d62812d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.681219 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9" (OuterVolumeSpecName: "kube-api-access-h65x9") pod "9429907b-3c4b-4759-87ff-91149d62812d" (UID: "9429907b-3c4b-4759-87ff-91149d62812d"). InnerVolumeSpecName "kube-api-access-h65x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.738212 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fdb847fbb-j6qpr"] Jan 23 18:12:23 crc kubenswrapper[4688]: E0123 18:12:23.738530 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9429907b-3c4b-4759-87ff-91149d62812d" containerName="controller-manager" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.738548 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9429907b-3c4b-4759-87ff-91149d62812d" containerName="controller-manager" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.738676 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9429907b-3c4b-4759-87ff-91149d62812d" containerName="controller-manager" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.739239 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.751654 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fdb847fbb-j6qpr"] Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.770161 4688 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.770238 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.770252 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h65x9\" (UniqueName: \"kubernetes.io/projected/9429907b-3c4b-4759-87ff-91149d62812d-kube-api-access-h65x9\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.770262 4688 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9429907b-3c4b-4759-87ff-91149d62812d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.770273 4688 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9429907b-3c4b-4759-87ff-91149d62812d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.871480 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe98824-e929-4b7c-9367-132bb54d1a69-serving-cert\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.871567 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-proxy-ca-bundles\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.871638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-client-ca\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.871675 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-config\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.871720 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ltl\" (UniqueName: 
\"kubernetes.io/projected/1fe98824-e929-4b7c-9367-132bb54d1a69-kube-api-access-95ltl\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.973965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-client-ca\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.974050 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-config\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.974075 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ltl\" (UniqueName: \"kubernetes.io/projected/1fe98824-e929-4b7c-9367-132bb54d1a69-kube-api-access-95ltl\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.974126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe98824-e929-4b7c-9367-132bb54d1a69-serving-cert\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.974150 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-proxy-ca-bundles\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.975378 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-client-ca\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.975677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-proxy-ca-bundles\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.975893 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fe98824-e929-4b7c-9367-132bb54d1a69-config\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 
23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.979207 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fe98824-e929-4b7c-9367-132bb54d1a69-serving-cert\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:23 crc kubenswrapper[4688]: I0123 18:12:23.994513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ltl\" (UniqueName: \"kubernetes.io/projected/1fe98824-e929-4b7c-9367-132bb54d1a69-kube-api-access-95ltl\") pod \"controller-manager-fdb847fbb-j6qpr\" (UID: \"1fe98824-e929-4b7c-9367-132bb54d1a69\") " pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.102393 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.241396 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" event={"ID":"9429907b-3c4b-4759-87ff-91149d62812d","Type":"ContainerDied","Data":"3d77fa86168a3aeb0c8ae231222ce67783cdd49fc911bbad40f4fc0326e0e5f5"} Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.241923 4688 scope.go:117] "RemoveContainer" containerID="e357da98badeeec9c8eadcb6bb4cc7d8677ade8a54583ca878fde507f05dc7b5" Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.241496 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-667c46fcf7-r4wx6" Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.786062 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.798422 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-667c46fcf7-r4wx6"] Jan 23 18:12:24 crc kubenswrapper[4688]: W0123 18:12:24.803008 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fe98824_e929_4b7c_9367_132bb54d1a69.slice/crio-9fba603afabe024dc4790fe93053999236df11e643bfefcb8949f6cf3523ea11 WatchSource:0}: Error finding container 9fba603afabe024dc4790fe93053999236df11e643bfefcb8949f6cf3523ea11: Status 404 returned error can't find the container with id 9fba603afabe024dc4790fe93053999236df11e643bfefcb8949f6cf3523ea11 Jan 23 18:12:24 crc kubenswrapper[4688]: I0123 18:12:24.803692 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fdb847fbb-j6qpr"] Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.249213 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" event={"ID":"1fe98824-e929-4b7c-9367-132bb54d1a69","Type":"ContainerStarted","Data":"dbb18c0adaa8d622d466dca7a2ddcd73e0f03599d09bbb4e7579b2ebcc05e37d"} Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.249812 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" 
event={"ID":"1fe98824-e929-4b7c-9367-132bb54d1a69","Type":"ContainerStarted","Data":"9fba603afabe024dc4790fe93053999236df11e643bfefcb8949f6cf3523ea11"} Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.249862 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.258463 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.270613 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fdb847fbb-j6qpr" podStartSLOduration=3.270560702 podStartE2EDuration="3.270560702s" podCreationTimestamp="2026-01-23 18:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:12:25.268271034 +0000 UTC m=+340.264095495" watchObservedRunningTime="2026-01-23 18:12:25.270560702 +0000 UTC m=+340.266385143" Jan 23 18:12:25 crc kubenswrapper[4688]: I0123 18:12:25.362564 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9429907b-3c4b-4759-87ff-91149d62812d" path="/var/lib/kubelet/pods/9429907b-3c4b-4759-87ff-91149d62812d/volumes" Jan 23 18:13:06 crc kubenswrapper[4688]: I0123 18:13:06.966093 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:13:06 crc kubenswrapper[4688]: I0123 18:13:06.967012 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:13:36 crc kubenswrapper[4688]: I0123 18:13:36.965285 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:13:36 crc kubenswrapper[4688]: I0123 18:13:36.967311 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:14:06 crc kubenswrapper[4688]: I0123 18:14:06.965314 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:14:06 crc kubenswrapper[4688]: I0123 18:14:06.966194 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:14:06 crc kubenswrapper[4688]: I0123 18:14:06.966851 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:14:06 crc kubenswrapper[4688]: I0123 18:14:06.967598 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:14:06 crc kubenswrapper[4688]: I0123 18:14:06.967675 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f" gracePeriod=600 Jan 23 18:14:07 crc kubenswrapper[4688]: I0123 18:14:07.932894 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f" exitCode=0 Jan 23 18:14:07 crc kubenswrapper[4688]: I0123 18:14:07.933002 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f"} Jan 23 18:14:07 crc kubenswrapper[4688]: I0123 18:14:07.933387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99"} Jan 23 18:14:07 crc kubenswrapper[4688]: I0123 18:14:07.933421 4688 scope.go:117] "RemoveContainer" containerID="6127a92f4e9fe28196e5a93505ae853342f0c650786920d9d7a9869f82ac5490" Jan 23 18:14:45 crc kubenswrapper[4688]: I0123 18:14:45.904831 4688 scope.go:117] "RemoveContainer" containerID="050263e751f00fb49f56e053185173af422c5406a93bd60012102e44bb3562d4" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.193661 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj"] Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.194884 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.197827 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.199922 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.208727 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj"] Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.319637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dtf7\" (UniqueName: \"kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.319707 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.319756 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.421109 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dtf7\" (UniqueName: \"kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.421207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.421288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.422410 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume\") pod 
\"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.434432 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.440553 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dtf7\" (UniqueName: \"kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7\") pod \"collect-profiles-29486535-4xpnj\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.517625 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:00 crc kubenswrapper[4688]: I0123 18:15:00.921177 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj"] Jan 23 18:15:01 crc kubenswrapper[4688]: I0123 18:15:01.277810 4688 generic.go:334] "Generic (PLEG): container finished" podID="7040e9ba-84d7-420e-81ac-f1aac91d5a47" containerID="33c65188357ecddc115db8df4c1ad64ee0205ff703068f2e9047ac200fd3b57e" exitCode=0 Jan 23 18:15:01 crc kubenswrapper[4688]: I0123 18:15:01.277871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" event={"ID":"7040e9ba-84d7-420e-81ac-f1aac91d5a47","Type":"ContainerDied","Data":"33c65188357ecddc115db8df4c1ad64ee0205ff703068f2e9047ac200fd3b57e"} Jan 23 18:15:01 crc kubenswrapper[4688]: I0123 18:15:01.277910 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" event={"ID":"7040e9ba-84d7-420e-81ac-f1aac91d5a47","Type":"ContainerStarted","Data":"8f5c09858403a3b24534dc4679b6f7f2db3fda2cb9b67b02be4c5c17ca5977f0"} Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.501414 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.655945 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume\") pod \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.656104 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dtf7\" (UniqueName: \"kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7\") pod \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.656140 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume\") pod \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\" (UID: \"7040e9ba-84d7-420e-81ac-f1aac91d5a47\") " Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.657312 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume" (OuterVolumeSpecName: "config-volume") pod "7040e9ba-84d7-420e-81ac-f1aac91d5a47" (UID: "7040e9ba-84d7-420e-81ac-f1aac91d5a47"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.663442 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7040e9ba-84d7-420e-81ac-f1aac91d5a47" (UID: "7040e9ba-84d7-420e-81ac-f1aac91d5a47"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.663499 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7" (OuterVolumeSpecName: "kube-api-access-9dtf7") pod "7040e9ba-84d7-420e-81ac-f1aac91d5a47" (UID: "7040e9ba-84d7-420e-81ac-f1aac91d5a47"). InnerVolumeSpecName "kube-api-access-9dtf7". 
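The collect-profiles-29486535-4xpnj pod whose volumes are being torn down here is one run of OLM's collect-profiles CronJob. Under the upstream CronJob controller's naming convention, the numeric suffix of a spawned Job is the scheduled run time in minutes since the Unix epoch, which is why this pod appeared at exactly 18:15:00. A quick Go check of that arithmetic (applying the convention to this Job name is an inference from the timestamps, not something read from the controller):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const suffixMinutes = 29486535 // from collect-profiles-29486535-4xpnj
	t := time.Unix(suffixMinutes*60, 0).UTC()
	fmt.Println(t) // 2026-01-23 18:15:00 +0000 UTC, matching the SyncLoop ADD above
}
```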
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.758551 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dtf7\" (UniqueName: \"kubernetes.io/projected/7040e9ba-84d7-420e-81ac-f1aac91d5a47-kube-api-access-9dtf7\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.758629 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7040e9ba-84d7-420e-81ac-f1aac91d5a47-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:02 crc kubenswrapper[4688]: I0123 18:15:02.758656 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7040e9ba-84d7-420e-81ac-f1aac91d5a47-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:15:03 crc kubenswrapper[4688]: I0123 18:15:03.293587 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" event={"ID":"7040e9ba-84d7-420e-81ac-f1aac91d5a47","Type":"ContainerDied","Data":"8f5c09858403a3b24534dc4679b6f7f2db3fda2cb9b67b02be4c5c17ca5977f0"} Jan 23 18:15:03 crc kubenswrapper[4688]: I0123 18:15:03.294085 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f5c09858403a3b24534dc4679b6f7f2db3fda2cb9b67b02be4c5c17ca5977f0" Jan 23 18:15:03 crc kubenswrapper[4688]: I0123 18:15:03.293652 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj" Jan 23 18:16:36 crc kubenswrapper[4688]: I0123 18:16:36.965163 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:16:36 crc kubenswrapper[4688]: I0123 18:16:36.966224 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:16:45 crc kubenswrapper[4688]: I0123 18:16:45.950680 4688 scope.go:117] "RemoveContainer" containerID="0d037ccd2477bf86059a2cfcd4847772156a04ab9495e89c8435e3adefa6ee80" Jan 23 18:17:06 crc kubenswrapper[4688]: I0123 18:17:06.965777 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:06 crc kubenswrapper[4688]: I0123 18:17:06.966730 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.206888 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz"] Jan 23 18:17:07 crc kubenswrapper[4688]: E0123 
18:17:07.207410 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7040e9ba-84d7-420e-81ac-f1aac91d5a47" containerName="collect-profiles" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.207506 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7040e9ba-84d7-420e-81ac-f1aac91d5a47" containerName="collect-profiles" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.207728 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7040e9ba-84d7-420e-81ac-f1aac91d5a47" containerName="collect-profiles" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.208599 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.212149 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rsccw"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.212355 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.212492 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.212642 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bs2p9" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.213321 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rsccw" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.215233 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2lnzx" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.238278 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q4zqg"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.239469 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.243748 4688 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-btjss" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.245532 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.258601 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rsccw"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.265249 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q4zqg"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.353825 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6w5\" (UniqueName: \"kubernetes.io/projected/893e289a-c400-40f2-b2cd-a9815c0cf488-kube-api-access-pb6w5\") pod \"cert-manager-webhook-687f57d79b-q4zqg\" (UID: \"893e289a-c400-40f2-b2cd-a9815c0cf488\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.353923 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdwr\" (UniqueName: \"kubernetes.io/projected/9bf3e910-f2fd-4f92-b345-422c1570bd89-kube-api-access-pkdwr\") pod \"cert-manager-858654f9db-rsccw\" (UID: \"9bf3e910-f2fd-4f92-b345-422c1570bd89\") " pod="cert-manager/cert-manager-858654f9db-rsccw" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.354061 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhc6t\" (UniqueName: \"kubernetes.io/projected/5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e-kube-api-access-zhc6t\") pod \"cert-manager-cainjector-cf98fcc89-vgqkz\" (UID: \"5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.456472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdwr\" (UniqueName: \"kubernetes.io/projected/9bf3e910-f2fd-4f92-b345-422c1570bd89-kube-api-access-pkdwr\") pod \"cert-manager-858654f9db-rsccw\" (UID: \"9bf3e910-f2fd-4f92-b345-422c1570bd89\") " pod="cert-manager/cert-manager-858654f9db-rsccw" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.456581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc6t\" (UniqueName: \"kubernetes.io/projected/5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e-kube-api-access-zhc6t\") pod \"cert-manager-cainjector-cf98fcc89-vgqkz\" (UID: \"5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.456673 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6w5\" (UniqueName: \"kubernetes.io/projected/893e289a-c400-40f2-b2cd-a9815c0cf488-kube-api-access-pb6w5\") pod \"cert-manager-webhook-687f57d79b-q4zqg\" (UID: \"893e289a-c400-40f2-b2cd-a9815c0cf488\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.480336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc6t\" (UniqueName: 
\"kubernetes.io/projected/5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e-kube-api-access-zhc6t\") pod \"cert-manager-cainjector-cf98fcc89-vgqkz\" (UID: \"5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.482202 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6w5\" (UniqueName: \"kubernetes.io/projected/893e289a-c400-40f2-b2cd-a9815c0cf488-kube-api-access-pb6w5\") pod \"cert-manager-webhook-687f57d79b-q4zqg\" (UID: \"893e289a-c400-40f2-b2cd-a9815c0cf488\") " pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.482797 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdwr\" (UniqueName: \"kubernetes.io/projected/9bf3e910-f2fd-4f92-b345-422c1570bd89-kube-api-access-pkdwr\") pod \"cert-manager-858654f9db-rsccw\" (UID: \"9bf3e910-f2fd-4f92-b345-422c1570bd89\") " pod="cert-manager/cert-manager-858654f9db-rsccw" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.537244 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.555847 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rsccw" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.571335 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.792096 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.807648 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.860172 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-q4zqg"] Jan 23 18:17:07 crc kubenswrapper[4688]: I0123 18:17:07.894764 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rsccw"] Jan 23 18:17:07 crc kubenswrapper[4688]: W0123 18:17:07.902477 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf3e910_f2fd_4f92_b345_422c1570bd89.slice/crio-dfcc2e965520845fc92abfceba1e9bcf702ec52a68157628a482bcc8bf203a3e WatchSource:0}: Error finding container dfcc2e965520845fc92abfceba1e9bcf702ec52a68157628a482bcc8bf203a3e: Status 404 returned error can't find the container with id dfcc2e965520845fc92abfceba1e9bcf702ec52a68157628a482bcc8bf203a3e Jan 23 18:17:08 crc kubenswrapper[4688]: I0123 18:17:08.243937 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" event={"ID":"893e289a-c400-40f2-b2cd-a9815c0cf488","Type":"ContainerStarted","Data":"4b06c057d04988018db126dd3373eb93a14f4932a6647706a99eb9297230f558"} Jan 23 18:17:08 crc kubenswrapper[4688]: I0123 18:17:08.245603 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rsccw" event={"ID":"9bf3e910-f2fd-4f92-b345-422c1570bd89","Type":"ContainerStarted","Data":"dfcc2e965520845fc92abfceba1e9bcf702ec52a68157628a482bcc8bf203a3e"} Jan 23 18:17:08 crc 
kubenswrapper[4688]: I0123 18:17:08.246750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" event={"ID":"5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e","Type":"ContainerStarted","Data":"c259440c7795f51e064147e527800fbc58e1b5e52efa852d99a8763df2a7ef72"} Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.291653 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rsccw" event={"ID":"9bf3e910-f2fd-4f92-b345-422c1570bd89","Type":"ContainerStarted","Data":"98e3cf47c605f2fa7ad9d4c07a32296548acfd2cf31ed360c2f621ec2a19b60e"} Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.294628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" event={"ID":"5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e","Type":"ContainerStarted","Data":"2e1ce370d1c6028ccf6d87d04c84eb3eb14929a5075dfe6ef8d650e4f34e1235"} Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.296255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" event={"ID":"893e289a-c400-40f2-b2cd-a9815c0cf488","Type":"ContainerStarted","Data":"e35ce5db4e35fcbbfb51e8fd72579d215a84d1125f8abd6c9cbcd8cf67e3ff33"} Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.296712 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.488898 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rsccw" podStartSLOduration=2.112676218 podStartE2EDuration="6.488870981s" podCreationTimestamp="2026-01-23 18:17:07 +0000 UTC" firstStartedPulling="2026-01-23 18:17:07.904857858 +0000 UTC m=+622.900682299" lastFinishedPulling="2026-01-23 18:17:12.281052601 +0000 UTC m=+627.276877062" observedRunningTime="2026-01-23 18:17:13.48713613 +0000 UTC m=+628.482960571" watchObservedRunningTime="2026-01-23 18:17:13.488870981 +0000 UTC m=+628.484695422" Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.531407 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" podStartSLOduration=2.053217417 podStartE2EDuration="6.531375748s" podCreationTimestamp="2026-01-23 18:17:07 +0000 UTC" firstStartedPulling="2026-01-23 18:17:07.867895492 +0000 UTC m=+622.863719933" lastFinishedPulling="2026-01-23 18:17:12.346053803 +0000 UTC m=+627.341878264" observedRunningTime="2026-01-23 18:17:13.511380446 +0000 UTC m=+628.507204907" watchObservedRunningTime="2026-01-23 18:17:13.531375748 +0000 UTC m=+628.527200189" Jan 23 18:17:13 crc kubenswrapper[4688]: I0123 18:17:13.533204 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgqkz" podStartSLOduration=2.021852594 podStartE2EDuration="6.53317247s" podCreationTimestamp="2026-01-23 18:17:07 +0000 UTC" firstStartedPulling="2026-01-23 18:17:07.806935468 +0000 UTC m=+622.802759909" lastFinishedPulling="2026-01-23 18:17:12.318255344 +0000 UTC m=+627.314079785" observedRunningTime="2026-01-23 18:17:13.530361128 +0000 UTC m=+628.526185569" watchObservedRunningTime="2026-01-23 18:17:13.53317247 +0000 UTC m=+628.528996911" Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.659552 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsqbq"] Jan 23 18:17:16 crc 
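The three pod_startup_latency_tracker entries above each report two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure with image-pull time (lastFinishedPulling minus firstStartedPulling) subtracted, so slow pulls do not count against the startup SLO. Reproducing the cert-manager-858654f9db-rsccw numbers from the monotonic m= offsets in its entry:

```go
package main

import "fmt"

func main() {
	// Values copied from the pod_startup_latency_tracker entry for
	// cert-manager-858654f9db-rsccw (monotonic m= offsets, in seconds).
	e2e := 6.488870981                    // podStartE2EDuration
	pull := 627.276877062 - 622.900682299 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("%.9f\n", e2e-pull)        // ~2.112676218, the logged podStartSLOduration
}
```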
kubenswrapper[4688]: I0123 18:17:16.660612 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-controller" containerID="cri-o://8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.660723 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="nbdb" containerID="cri-o://8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.660723 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.660796 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-acl-logging" containerID="cri-o://b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.660780 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-node" containerID="cri-o://75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.660926 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="northd" containerID="cri-o://f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.661169 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="sbdb" containerID="cri-o://079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.714235 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" containerID="cri-o://286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" gracePeriod=30 Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.941621 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/2.log" Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.944547 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovn-acl-logging/0.log" Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.945516 4688 log.go:25] "Finished parsing log file" 
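The DELETE arrives from the API (source="api"), and the kubelet then kills each of the eight ovnkube-node containers with a 30-second grace period. Given the immediate ADD of the replacement pod ovnkube-node-bl7rf just below, this is most likely a DaemonSet rollout replacing the pod rather than a manual deletion. A client-go sketch of an equivalent delete call, purely illustrative (the pod name and namespace are taken from the log, but this is not the call that actually produced the entry):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(30) // matches the per-container gracePeriod=30 above
	if err := cs.CoreV1().Pods("openshift-ovn-kubernetes").Delete(
		context.TODO(),
		"ovnkube-node-zsqbq",
		metav1.DeleteOptions{GracePeriodSeconds: &grace},
	); err != nil {
		panic(err)
	}
}
```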
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovn-controller/0.log" Jan 23 18:17:16 crc kubenswrapper[4688]: I0123 18:17:16.946306 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009066 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bl7rf"] Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009457 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009483 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009502 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-acl-logging" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009514 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-acl-logging" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009526 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009535 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009551 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009558 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009570 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-node" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009579 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-node" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009597 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kubecfg-setup" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009604 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kubecfg-setup" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009616 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="northd" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009625 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="northd" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009638 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="sbdb" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009644 4688 
state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="sbdb" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009657 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="nbdb" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009664 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="nbdb" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009671 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009677 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009686 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009691 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.009698 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009704 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009828 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009843 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009854 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009867 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009878 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009889 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="northd" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009898 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovnkube-controller" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009909 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="kube-rbac-proxy-node" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009919 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="nbdb" Jan 23 18:17:17 crc 
kubenswrapper[4688]: I0123 18:17:17.009929 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="sbdb" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.009938 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="336645d6-da82-4dba-9436-4196367fb547" containerName="ovn-acl-logging" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.012124 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.051879 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.051955 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052000 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052032 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052071 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052110 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052214 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052236 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052300 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052341 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052368 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052425 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052466 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052483 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052498 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052525 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sgmr\" (UniqueName: \"kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052612 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052716 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides\") pod \"336645d6-da82-4dba-9436-4196367fb547\" (UID: \"336645d6-da82-4dba-9436-4196367fb547\") " Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052509 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log" (OuterVolumeSpecName: "node-log") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052550 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052549 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-etc-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-log-socket\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053098 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-slash\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053152 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-systemd-units\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-ovn\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053216 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-node-log\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053245 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-netd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053267 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-netns\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053305 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-systemd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053331 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-kubelet\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053356 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-script-lib\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053397 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-env-overrides\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053433 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-config\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053462 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-var-lib-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053485 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a43396-14d2-4924-9585-8a23f601961c-ovn-node-metrics-cert\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053513 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053543 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvt2\" (UniqueName: \"kubernetes.io/projected/98a43396-14d2-4924-9585-8a23f601961c-kube-api-access-nlvt2\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053570 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-bin\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053621 4688 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053638 4688 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053650 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053663 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052577 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052581 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket" (OuterVolumeSpecName: "log-socket") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052597 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052611 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.052633 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053097 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053209 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053243 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.053812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.054050 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.054165 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash" (OuterVolumeSpecName: "host-slash") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.054340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.054672 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.062573 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr" (OuterVolumeSpecName: "kube-api-access-5sgmr") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "kube-api-access-5sgmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.063533 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.075339 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "336645d6-da82-4dba-9436-4196367fb547" (UID: "336645d6-da82-4dba-9436-4196367fb547"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155455 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-slash\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155569 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-systemd-units\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155623 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155661 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-ovn\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155754 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-node-log\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-slash\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155676 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-node-log\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-netd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-netns\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-ovn\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155882 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-netd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-systemd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155964 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-kubelet\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155879 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-systemd-units\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155909 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-run-netns\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155993 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-script-lib\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156029 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-kubelet\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.155910 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-run-systemd\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156232 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-env-overrides\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156316 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-config\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-var-lib-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156388 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a43396-14d2-4924-9585-8a23f601961c-ovn-node-metrics-cert\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156470 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvt2\" (UniqueName: \"kubernetes.io/projected/98a43396-14d2-4924-9585-8a23f601961c-kube-api-access-nlvt2\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-var-lib-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156517 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156532 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-bin\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156582 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-etc-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-log-socket\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156756 4688 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156773 4688 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156789 4688 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156807 4688 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156820 4688 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156830 4688 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156841 4688 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156850 4688 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156860 4688 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156868 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156879 4688 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/336645d6-da82-4dba-9436-4196367fb547-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156893 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/336645d6-da82-4dba-9436-4196367fb547-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156906 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sgmr\" (UniqueName: \"kubernetes.io/projected/336645d6-da82-4dba-9436-4196367fb547-kube-api-access-5sgmr\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156919 4688 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156932 4688 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156945 4688 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336645d6-da82-4dba-9436-4196367fb547-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156985 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-script-lib\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.157010 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-host-cni-bin\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.156986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-log-socket\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.157039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a43396-14d2-4924-9585-8a23f601961c-etc-openvswitch\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.157437 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-ovnkube-config\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.158521 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a43396-14d2-4924-9585-8a23f601961c-env-overrides\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.160726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a43396-14d2-4924-9585-8a23f601961c-ovn-node-metrics-cert\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.176842 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvt2\" (UniqueName: \"kubernetes.io/projected/98a43396-14d2-4924-9585-8a23f601961c-kube-api-access-nlvt2\") pod \"ovnkube-node-bl7rf\" (UID: \"98a43396-14d2-4924-9585-8a23f601961c\") " pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.331840 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.334504 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/1.log" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.335255 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/0.log" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.335318 4688 generic.go:334] "Generic (PLEG): container finished" podID="39fdea6e-e9b8-4fb4-9375-aaf302a204d3" containerID="12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056" exitCode=2 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.335390 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerDied","Data":"12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.335453 4688 scope.go:117] "RemoveContainer" containerID="18856ff19ea8d98c0365ffdb12682824e04e1c4da7e32e2b7e774e1a433b7890" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.336273 4688 scope.go:117] "RemoveContainer" containerID="12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.336629 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gf4sc_openshift-multus(39fdea6e-e9b8-4fb4-9375-aaf302a204d3)\"" pod="openshift-multus/multus-gf4sc" podUID="39fdea6e-e9b8-4fb4-9375-aaf302a204d3" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.340886 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovnkube-controller/2.log" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.344316 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovn-acl-logging/0.log" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.344793 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zsqbq_336645d6-da82-4dba-9436-4196367fb547/ovn-controller/0.log" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345307 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345326 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345334 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345340 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" exitCode=0 Jan 23 
18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345347 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345353 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" exitCode=0 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345359 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" exitCode=143 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345366 4688 generic.go:334] "Generic (PLEG): container finished" podID="336645d6-da82-4dba-9436-4196367fb547" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" exitCode=143 Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345383 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345417 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345540 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345553 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345563 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345575 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345587 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345600 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345606 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345612 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345620 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345625 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345630 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345635 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345641 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345646 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345653 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345661 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345667 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345673 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345678 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345684 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345690 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345696 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345702 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345708 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345716 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345734 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345740 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345746 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345751 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345758 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345763 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345768 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345773 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345779 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345784 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345791 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zsqbq" event={"ID":"336645d6-da82-4dba-9436-4196367fb547","Type":"ContainerDied","Data":"cca7942a8d8f4a15b1ab719bca31e57a69c36c06aa41c8028311ce9d5e0d1b6f"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345799 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345805 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345810 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345815 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345820 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345825 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345830 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345836 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345842 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.345847 4688 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.381490 4688 scope.go:117] "RemoveContainer" 
containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.398294 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsqbq"] Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.413532 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zsqbq"] Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.415677 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.438381 4688 scope.go:117] "RemoveContainer" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.467533 4688 scope.go:117] "RemoveContainer" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.484938 4688 scope.go:117] "RemoveContainer" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.507327 4688 scope.go:117] "RemoveContainer" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.576952 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-q4zqg" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.582356 4688 scope.go:117] "RemoveContainer" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.603005 4688 scope.go:117] "RemoveContainer" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.619421 4688 scope.go:117] "RemoveContainer" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.636040 4688 scope.go:117] "RemoveContainer" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.657446 4688 scope.go:117] "RemoveContainer" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.658643 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": container with ID starting with 286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460 not found: ID does not exist" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.658733 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} err="failed to get container status \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": rpc error: code = NotFound desc = could not find container \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": container with ID starting with 286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.658809 4688 scope.go:117] 
"RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.659755 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": container with ID starting with bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8 not found: ID does not exist" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.659815 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} err="failed to get container status \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": rpc error: code = NotFound desc = could not find container \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": container with ID starting with bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.659856 4688 scope.go:117] "RemoveContainer" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.660354 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": container with ID starting with 079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c not found: ID does not exist" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.660405 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} err="failed to get container status \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": rpc error: code = NotFound desc = could not find container \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": container with ID starting with 079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.660448 4688 scope.go:117] "RemoveContainer" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.660791 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": container with ID starting with 8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce not found: ID does not exist" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.660847 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} err="failed to get container status \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": rpc error: code = NotFound desc = could not find container \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": container with ID starting with 
8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.660880 4688 scope.go:117] "RemoveContainer" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.661348 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": container with ID starting with f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626 not found: ID does not exist" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.661403 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} err="failed to get container status \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": rpc error: code = NotFound desc = could not find container \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": container with ID starting with f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.661430 4688 scope.go:117] "RemoveContainer" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.661843 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": container with ID starting with 486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07 not found: ID does not exist" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.661882 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} err="failed to get container status \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": rpc error: code = NotFound desc = could not find container \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": container with ID starting with 486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.661925 4688 scope.go:117] "RemoveContainer" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.662675 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": container with ID starting with 75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d not found: ID does not exist" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.662708 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} err="failed to get container status \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": rpc 
error: code = NotFound desc = could not find container \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": container with ID starting with 75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.662732 4688 scope.go:117] "RemoveContainer" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.663065 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": container with ID starting with b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf not found: ID does not exist" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663107 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} err="failed to get container status \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": rpc error: code = NotFound desc = could not find container \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": container with ID starting with b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663146 4688 scope.go:117] "RemoveContainer" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.663576 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": container with ID starting with 8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad not found: ID does not exist" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663616 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} err="failed to get container status \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": rpc error: code = NotFound desc = could not find container \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": container with ID starting with 8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663642 4688 scope.go:117] "RemoveContainer" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: E0123 18:17:17.663959 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": container with ID starting with 965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf not found: ID does not exist" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663985 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} err="failed to get container status \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": rpc error: code = NotFound desc = could not find container \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": container with ID starting with 965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.663999 4688 scope.go:117] "RemoveContainer" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.664324 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} err="failed to get container status \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": rpc error: code = NotFound desc = could not find container \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": container with ID starting with 286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.664351 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.664767 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} err="failed to get container status \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": rpc error: code = NotFound desc = could not find container \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": container with ID starting with bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.664817 4688 scope.go:117] "RemoveContainer" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.665291 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} err="failed to get container status \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": rpc error: code = NotFound desc = could not find container \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": container with ID starting with 079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.665328 4688 scope.go:117] "RemoveContainer" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.665619 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} err="failed to get container status \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": rpc error: code = NotFound desc = could not find container \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": container with ID starting with 8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce not found: ID does not exist" Jan 
23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.665646 4688 scope.go:117] "RemoveContainer" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666042 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} err="failed to get container status \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": rpc error: code = NotFound desc = could not find container \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": container with ID starting with f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666072 4688 scope.go:117] "RemoveContainer" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666599 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} err="failed to get container status \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": rpc error: code = NotFound desc = could not find container \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": container with ID starting with 486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666628 4688 scope.go:117] "RemoveContainer" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666947 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} err="failed to get container status \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": rpc error: code = NotFound desc = could not find container \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": container with ID starting with 75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.666968 4688 scope.go:117] "RemoveContainer" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.667297 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} err="failed to get container status \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": rpc error: code = NotFound desc = could not find container \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": container with ID starting with b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.667355 4688 scope.go:117] "RemoveContainer" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.667794 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} err="failed to get container status 
\"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": rpc error: code = NotFound desc = could not find container \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": container with ID starting with 8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.667817 4688 scope.go:117] "RemoveContainer" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668051 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} err="failed to get container status \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": rpc error: code = NotFound desc = could not find container \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": container with ID starting with 965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668085 4688 scope.go:117] "RemoveContainer" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668398 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} err="failed to get container status \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": rpc error: code = NotFound desc = could not find container \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": container with ID starting with 286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668433 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668859 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} err="failed to get container status \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": rpc error: code = NotFound desc = could not find container \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": container with ID starting with bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.668905 4688 scope.go:117] "RemoveContainer" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.669362 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} err="failed to get container status \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": rpc error: code = NotFound desc = could not find container \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": container with ID starting with 079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.669413 4688 scope.go:117] "RemoveContainer" 
containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.669704 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} err="failed to get container status \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": rpc error: code = NotFound desc = could not find container \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": container with ID starting with 8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.669727 4688 scope.go:117] "RemoveContainer" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670095 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} err="failed to get container status \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": rpc error: code = NotFound desc = could not find container \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": container with ID starting with f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670121 4688 scope.go:117] "RemoveContainer" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670393 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} err="failed to get container status \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": rpc error: code = NotFound desc = could not find container \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": container with ID starting with 486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670419 4688 scope.go:117] "RemoveContainer" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670794 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} err="failed to get container status \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": rpc error: code = NotFound desc = could not find container \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": container with ID starting with 75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.670844 4688 scope.go:117] "RemoveContainer" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671159 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} err="failed to get container status \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": rpc error: code = NotFound desc = could not find 
container \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": container with ID starting with b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671200 4688 scope.go:117] "RemoveContainer" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671481 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} err="failed to get container status \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": rpc error: code = NotFound desc = could not find container \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": container with ID starting with 8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671527 4688 scope.go:117] "RemoveContainer" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671825 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} err="failed to get container status \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": rpc error: code = NotFound desc = could not find container \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": container with ID starting with 965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.671848 4688 scope.go:117] "RemoveContainer" containerID="286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.672220 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460"} err="failed to get container status \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": rpc error: code = NotFound desc = could not find container \"286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460\": container with ID starting with 286616dff6b52be360d25c2055e27eb217e035e75680bbd8a922e15cf7224460 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.672250 4688 scope.go:117] "RemoveContainer" containerID="bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.672811 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8"} err="failed to get container status \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": rpc error: code = NotFound desc = could not find container \"bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8\": container with ID starting with bfc99ea0d4a2f1f8f73f9b7cdb3aeaf87713e7834b3e6b22bf572965881c38d8 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.672861 4688 scope.go:117] "RemoveContainer" containerID="079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673263 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c"} err="failed to get container status \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": rpc error: code = NotFound desc = could not find container \"079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c\": container with ID starting with 079b6c1925afa38e077e92f8e9f49ef71a32a037f466426fc16571bd7512de5c not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673306 4688 scope.go:117] "RemoveContainer" containerID="8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673608 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce"} err="failed to get container status \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": rpc error: code = NotFound desc = could not find container \"8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce\": container with ID starting with 8faa98d02b5b6f436ecf33f8ba4e3f620231fb2e2b1543d3f51784a81dcdc6ce not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673632 4688 scope.go:117] "RemoveContainer" containerID="f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673951 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626"} err="failed to get container status \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": rpc error: code = NotFound desc = could not find container \"f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626\": container with ID starting with f7b8c51814e284084748d0787a8d117820d2a169b0ce8553f12cb2fc5d378626 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.673980 4688 scope.go:117] "RemoveContainer" containerID="486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.674517 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07"} err="failed to get container status \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": rpc error: code = NotFound desc = could not find container \"486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07\": container with ID starting with 486741f94103f0899e7604cdce29f734805daa6644ab3a6fe1198d3560f89a07 not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.674572 4688 scope.go:117] "RemoveContainer" containerID="75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.674863 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d"} err="failed to get container status \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": rpc error: code = NotFound desc = could not find container \"75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d\": container with ID starting with 
75e79bdd169f1726b9460d50495932eb21e80bbf319a2835099bc7dedb81f39d not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.674888 4688 scope.go:117] "RemoveContainer" containerID="b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.675892 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf"} err="failed to get container status \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": rpc error: code = NotFound desc = could not find container \"b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf\": container with ID starting with b4c0e82be52cc652dd2b74389a69a8d4e2a4488bf9ceaed6773e3575a74722cf not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.675926 4688 scope.go:117] "RemoveContainer" containerID="8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.676489 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad"} err="failed to get container status \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": rpc error: code = NotFound desc = could not find container \"8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad\": container with ID starting with 8b35d2e34bafa29581ec2f3d78c8613bde07f3e8d17ae61acc97fe41c46eccad not found: ID does not exist" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.676524 4688 scope.go:117] "RemoveContainer" containerID="965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf" Jan 23 18:17:17 crc kubenswrapper[4688]: I0123 18:17:17.677003 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf"} err="failed to get container status \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": rpc error: code = NotFound desc = could not find container \"965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf\": container with ID starting with 965bfc479d20b5f49787a11aed38b71e9714c96fe6a31e91c34f0a17137e2baf not found: ID does not exist" Jan 23 18:17:18 crc kubenswrapper[4688]: I0123 18:17:18.354694 4688 generic.go:334] "Generic (PLEG): container finished" podID="98a43396-14d2-4924-9585-8a23f601961c" containerID="5a704e6c9ddb632fa96adceb1c0324dfad59f7cb4efe434d43459b68c055fa89" exitCode=0 Jan 23 18:17:18 crc kubenswrapper[4688]: I0123 18:17:18.354788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerDied","Data":"5a704e6c9ddb632fa96adceb1c0324dfad59f7cb4efe434d43459b68c055fa89"} Jan 23 18:17:18 crc kubenswrapper[4688]: I0123 18:17:18.355369 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"02b0b7377f974a6b6d362c2f3ee9c8a4abd1730114110b218bf31df1aaa6982d"} Jan 23 18:17:18 crc kubenswrapper[4688]: I0123 18:17:18.358044 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/1.log" Jan 23 18:17:19 crc 
kubenswrapper[4688]: I0123 18:17:19.365159 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336645d6-da82-4dba-9436-4196367fb547" path="/var/lib/kubelet/pods/336645d6-da82-4dba-9436-4196367fb547/volumes" Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.371824 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"363d4468a92789878c9757bb96b187c841e146b71ed6eb0253c26b00838ef94b"} Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.372021 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"ddc1c718e3e73a1e24d1751b9f87ebed5c79c6039e6fc05322bf6ad5fa604b64"} Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.372089 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"3fdacdc8d0ac4076b6c80c5cba59514e2f045609bb4a51603cbee2a4d9a1a735"} Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.372232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"ddad1f91e5fbca66d4ae94f64c87885cee1879a453d6aa2d6b6d0a5be21ca351"} Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.372268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"af4ffe8bc2053881f37ca159045d61311f2d61da844361186cd6d48792559b4a"} Jan 23 18:17:19 crc kubenswrapper[4688]: I0123 18:17:19.372350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"93407bb1a9c21c0f8988dc81b6e48419ab23fb5bfc5e347072a8122aaa48e574"} Jan 23 18:17:21 crc kubenswrapper[4688]: I0123 18:17:21.390092 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"48115422464fdb432b7a5f2aa5b994d01e9084aeabe42f4a0844e1474409a195"} Jan 23 18:17:24 crc kubenswrapper[4688]: I0123 18:17:24.414860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" event={"ID":"98a43396-14d2-4924-9585-8a23f601961c","Type":"ContainerStarted","Data":"76dc03df14b0822ec8456e2db98893c065cb62c1dcc072af2b6f7be1d41ffb45"} Jan 23 18:17:24 crc kubenswrapper[4688]: I0123 18:17:24.415387 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:24 crc kubenswrapper[4688]: I0123 18:17:24.415401 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:24 crc kubenswrapper[4688]: I0123 18:17:24.451990 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" podStartSLOduration=8.451963687 podStartE2EDuration="8.451963687s" podCreationTimestamp="2026-01-23 18:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-23 18:17:24.449854906 +0000 UTC m=+639.445679357" watchObservedRunningTime="2026-01-23 18:17:24.451963687 +0000 UTC m=+639.447788128" Jan 23 18:17:24 crc kubenswrapper[4688]: I0123 18:17:24.454563 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:25 crc kubenswrapper[4688]: I0123 18:17:25.423351 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:25 crc kubenswrapper[4688]: I0123 18:17:25.502104 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:32 crc kubenswrapper[4688]: I0123 18:17:32.357257 4688 scope.go:117] "RemoveContainer" containerID="12722219e8098865c349ba7cb9cc6b83b50eda61f6c7da981cce9c870b9f4056" Jan 23 18:17:33 crc kubenswrapper[4688]: I0123 18:17:33.599579 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gf4sc_39fdea6e-e9b8-4fb4-9375-aaf302a204d3/kube-multus/1.log" Jan 23 18:17:33 crc kubenswrapper[4688]: I0123 18:17:33.600089 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gf4sc" event={"ID":"39fdea6e-e9b8-4fb4-9375-aaf302a204d3","Type":"ContainerStarted","Data":"5dedde077c66567da43c85c7dca00fa461ad2f24449e112dce09fc76c4b7067b"} Jan 23 18:17:36 crc kubenswrapper[4688]: I0123 18:17:36.964917 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:17:36 crc kubenswrapper[4688]: I0123 18:17:36.965446 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:17:36 crc kubenswrapper[4688]: I0123 18:17:36.965511 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:17:36 crc kubenswrapper[4688]: I0123 18:17:36.966366 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:17:36 crc kubenswrapper[4688]: I0123 18:17:36.966445 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99" gracePeriod=600 Jan 23 18:17:37 crc kubenswrapper[4688]: I0123 18:17:37.629462 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99"} Jan 23 
18:17:37 crc kubenswrapper[4688]: I0123 18:17:37.629950 4688 scope.go:117] "RemoveContainer" containerID="ad9c21b368ee92434601444d389b9dec44412e1e582cc02198cf51f288c6f04f" Jan 23 18:17:37 crc kubenswrapper[4688]: I0123 18:17:37.629380 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99" exitCode=0 Jan 23 18:17:38 crc kubenswrapper[4688]: I0123 18:17:38.643038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae"} Jan 23 18:17:47 crc kubenswrapper[4688]: I0123 18:17:47.365587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bl7rf" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.168827 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g"] Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.170345 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.173027 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.184256 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g"] Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.314123 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.314249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.314291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9l4g\" (UniqueName: \"kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.416769 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: 
\"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.416093 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.416894 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9l4g\" (UniqueName: \"kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.417469 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.417866 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.438959 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9l4g\" (UniqueName: \"kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.491361 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.702917 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g"] Jan 23 18:17:48 crc kubenswrapper[4688]: I0123 18:17:48.726499 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" event={"ID":"93e09072-68c1-41dd-8bf2-b939b18899b2","Type":"ContainerStarted","Data":"916cf56974863f42ccb2b82a966300618108b7d04dabf637eaca2dc8ac2132fd"} Jan 23 18:17:49 crc kubenswrapper[4688]: I0123 18:17:49.736661 4688 generic.go:334] "Generic (PLEG): container finished" podID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerID="0bd744182f725697cbf37943912a6e4b5109f74c0f5a1ac91ffcd5fd08df296a" exitCode=0 Jan 23 18:17:49 crc kubenswrapper[4688]: I0123 18:17:49.736741 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" event={"ID":"93e09072-68c1-41dd-8bf2-b939b18899b2","Type":"ContainerDied","Data":"0bd744182f725697cbf37943912a6e4b5109f74c0f5a1ac91ffcd5fd08df296a"} Jan 23 18:17:51 crc kubenswrapper[4688]: I0123 18:17:51.751122 4688 generic.go:334] "Generic (PLEG): container finished" podID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerID="bb6b9a5383a2bec386f00c451b0982b24d742244349fd5d296aac76dbb253217" exitCode=0 Jan 23 18:17:51 crc kubenswrapper[4688]: I0123 18:17:51.751270 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" event={"ID":"93e09072-68c1-41dd-8bf2-b939b18899b2","Type":"ContainerDied","Data":"bb6b9a5383a2bec386f00c451b0982b24d742244349fd5d296aac76dbb253217"} Jan 23 18:17:52 crc kubenswrapper[4688]: I0123 18:17:52.761298 4688 generic.go:334] "Generic (PLEG): container finished" podID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerID="ec0c6ff368fa0c6b98fa6ff36a7cc8d5aff931b5d7e82fe2e9e3a8f1c4ab1358" exitCode=0 Jan 23 18:17:52 crc kubenswrapper[4688]: I0123 18:17:52.761383 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" event={"ID":"93e09072-68c1-41dd-8bf2-b939b18899b2","Type":"ContainerDied","Data":"ec0c6ff368fa0c6b98fa6ff36a7cc8d5aff931b5d7e82fe2e9e3a8f1c4ab1358"} Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.023333 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.211795 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util\") pod \"93e09072-68c1-41dd-8bf2-b939b18899b2\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.212001 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle\") pod \"93e09072-68c1-41dd-8bf2-b939b18899b2\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.212050 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9l4g\" (UniqueName: \"kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g\") pod \"93e09072-68c1-41dd-8bf2-b939b18899b2\" (UID: \"93e09072-68c1-41dd-8bf2-b939b18899b2\") " Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.215385 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle" (OuterVolumeSpecName: "bundle") pod "93e09072-68c1-41dd-8bf2-b939b18899b2" (UID: "93e09072-68c1-41dd-8bf2-b939b18899b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.218144 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g" (OuterVolumeSpecName: "kube-api-access-q9l4g") pod "93e09072-68c1-41dd-8bf2-b939b18899b2" (UID: "93e09072-68c1-41dd-8bf2-b939b18899b2"). InnerVolumeSpecName "kube-api-access-q9l4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.223355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util" (OuterVolumeSpecName: "util") pod "93e09072-68c1-41dd-8bf2-b939b18899b2" (UID: "93e09072-68c1-41dd-8bf2-b939b18899b2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.313506 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.313564 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93e09072-68c1-41dd-8bf2-b939b18899b2-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.313576 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9l4g\" (UniqueName: \"kubernetes.io/projected/93e09072-68c1-41dd-8bf2-b939b18899b2-kube-api-access-q9l4g\") on node \"crc\" DevicePath \"\"" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.777440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" event={"ID":"93e09072-68c1-41dd-8bf2-b939b18899b2","Type":"ContainerDied","Data":"916cf56974863f42ccb2b82a966300618108b7d04dabf637eaca2dc8ac2132fd"} Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.777491 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916cf56974863f42ccb2b82a966300618108b7d04dabf637eaca2dc8ac2132fd" Jan 23 18:17:54 crc kubenswrapper[4688]: I0123 18:17:54.777522 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.813111 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw"] Jan 23 18:18:06 crc kubenswrapper[4688]: E0123 18:18:06.814266 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="extract" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.814283 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="extract" Jan 23 18:18:06 crc kubenswrapper[4688]: E0123 18:18:06.814300 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="util" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.814307 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="util" Jan 23 18:18:06 crc kubenswrapper[4688]: E0123 18:18:06.814326 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="pull" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.814333 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="pull" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.814446 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e09072-68c1-41dd-8bf2-b939b18899b2" containerName="extract" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.815045 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.818786 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cljzb" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.818786 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.819003 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.833207 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw"] Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.874835 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h"] Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.875776 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.878768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.879036 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nj72j" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.897563 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h"] Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.906029 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f"] Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.906693 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7mnn\" (UniqueName: \"kubernetes.io/projected/505c5412-6a67-4596-ae6a-bbd51d146126-kube-api-access-z7mnn\") pod \"obo-prometheus-operator-68bc856cb9-fkspw\" (UID: \"505c5412-6a67-4596-ae6a-bbd51d146126\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.907293 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:06 crc kubenswrapper[4688]: I0123 18:18:06.928109 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.007856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.007982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mnn\" (UniqueName: \"kubernetes.io/projected/505c5412-6a67-4596-ae6a-bbd51d146126-kube-api-access-z7mnn\") pod \"obo-prometheus-operator-68bc856cb9-fkspw\" (UID: \"505c5412-6a67-4596-ae6a-bbd51d146126\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.008024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.008050 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.008074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.039209 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mnn\" (UniqueName: \"kubernetes.io/projected/505c5412-6a67-4596-ae6a-bbd51d146126-kube-api-access-z7mnn\") pod \"obo-prometheus-operator-68bc856cb9-fkspw\" (UID: \"505c5412-6a67-4596-ae6a-bbd51d146126\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.066788 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-86gvw"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.068301 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.070560 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.071953 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ssl5j" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.082370 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-86gvw"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.110344 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.110419 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.110460 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.110512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.117581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.120395 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/318d598f-84d5-418c-b820-d7ade7fcc8de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f\" (UID: \"318d598f-84d5-418c-b820-d7ade7fcc8de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.131515 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.136743 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.139823 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/587391e1-2b8a-40a1-9106-cdda7cb8a2bd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h\" (UID: \"587391e1-2b8a-40a1-9106-cdda7cb8a2bd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.195838 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.211957 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27mk\" (UniqueName: \"kubernetes.io/projected/8f8e5732-68b1-4f4e-906c-303e1eb20baf-kube-api-access-p27mk\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.212055 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8e5732-68b1-4f4e-906c-303e1eb20baf-observability-operator-tls\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.241623 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.270582 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pgd8p"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.271568 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.278300 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-vf66w" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.295954 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pgd8p"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.314488 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27mk\" (UniqueName: \"kubernetes.io/projected/8f8e5732-68b1-4f4e-906c-303e1eb20baf-kube-api-access-p27mk\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.314596 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8e5732-68b1-4f4e-906c-303e1eb20baf-observability-operator-tls\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.325534 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8e5732-68b1-4f4e-906c-303e1eb20baf-observability-operator-tls\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.349516 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27mk\" (UniqueName: \"kubernetes.io/projected/8f8e5732-68b1-4f4e-906c-303e1eb20baf-kube-api-access-p27mk\") pod \"observability-operator-59bdc8b94-86gvw\" (UID: \"8f8e5732-68b1-4f4e-906c-303e1eb20baf\") " pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.387397 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.415969 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbrtc\" (UniqueName: \"kubernetes.io/projected/9cb38355-91e8-4856-abfa-b307e3f1909b-kube-api-access-wbrtc\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.416071 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cb38355-91e8-4856-abfa-b307e3f1909b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.518771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cb38355-91e8-4856-abfa-b307e3f1909b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.518869 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbrtc\" (UniqueName: \"kubernetes.io/projected/9cb38355-91e8-4856-abfa-b307e3f1909b-kube-api-access-wbrtc\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.520204 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/9cb38355-91e8-4856-abfa-b307e3f1909b-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.541318 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbrtc\" (UniqueName: \"kubernetes.io/projected/9cb38355-91e8-4856-abfa-b307e3f1909b-kube-api-access-wbrtc\") pod \"perses-operator-5bf474d74f-pgd8p\" (UID: \"9cb38355-91e8-4856-abfa-b307e3f1909b\") " pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.608367 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.661177 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.749874 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h"] Jan 23 18:18:07 crc kubenswrapper[4688]: W0123 18:18:07.765173 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod587391e1_2b8a_40a1_9106_cdda7cb8a2bd.slice/crio-8552500dff974977eee4cb5409f2ff7190ac47c2b60441ccb1a15bb6bab308b9 WatchSource:0}: Error finding container 8552500dff974977eee4cb5409f2ff7190ac47c2b60441ccb1a15bb6bab308b9: Status 404 returned error can't find the container with id 8552500dff974977eee4cb5409f2ff7190ac47c2b60441ccb1a15bb6bab308b9 Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.771713 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-86gvw"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.814722 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw"] Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.895838 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" event={"ID":"505c5412-6a67-4596-ae6a-bbd51d146126","Type":"ContainerStarted","Data":"a43aa25cc871c9bb3553c540f5aad08241d88055157121d09aacd4fec98420a4"} Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.897302 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" event={"ID":"318d598f-84d5-418c-b820-d7ade7fcc8de","Type":"ContainerStarted","Data":"eca5a20ac5bb4fd6b9f73eedc8183bf956816c36b45b631a231d83ca70da61e8"} Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.898229 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" event={"ID":"8f8e5732-68b1-4f4e-906c-303e1eb20baf","Type":"ContainerStarted","Data":"b27b80956ccc4b655daf31f282efa4fb974be25b4c3e940b08fda94109ce88a2"} Jan 23 18:18:07 crc kubenswrapper[4688]: I0123 18:18:07.900082 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" event={"ID":"587391e1-2b8a-40a1-9106-cdda7cb8a2bd","Type":"ContainerStarted","Data":"8552500dff974977eee4cb5409f2ff7190ac47c2b60441ccb1a15bb6bab308b9"} Jan 23 18:18:08 crc kubenswrapper[4688]: I0123 18:18:08.008852 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pgd8p"] Jan 23 18:18:08 crc kubenswrapper[4688]: W0123 18:18:08.022882 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb38355_91e8_4856_abfa_b307e3f1909b.slice/crio-57ca3f927dbd24977b1e50338b133804d51104b9dc1674851239db906cb5a713 WatchSource:0}: Error finding container 57ca3f927dbd24977b1e50338b133804d51104b9dc1674851239db906cb5a713: Status 404 returned error can't find the container with id 57ca3f927dbd24977b1e50338b133804d51104b9dc1674851239db906cb5a713 Jan 23 18:18:08 crc kubenswrapper[4688]: I0123 
18:18:08.968263 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" event={"ID":"9cb38355-91e8-4856-abfa-b307e3f1909b","Type":"ContainerStarted","Data":"57ca3f927dbd24977b1e50338b133804d51104b9dc1674851239db906cb5a713"} Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.057368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" event={"ID":"8f8e5732-68b1-4f4e-906c-303e1eb20baf","Type":"ContainerStarted","Data":"3a3ccbb0d810046e33633d6aac2678e58339ddc5da92a72db619d592f8bb1e8f"} Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.058341 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.059056 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" event={"ID":"587391e1-2b8a-40a1-9106-cdda7cb8a2bd","Type":"ContainerStarted","Data":"ae42fa7d004187edfba4dab10d19fe7c8f630ed6a8810a683660f6ff617a6200"} Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.059612 4688 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-86gvw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.37:8081/healthz\": dial tcp 10.217.0.37:8081: connect: connection refused" start-of-body= Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.059666 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" podUID="8f8e5732-68b1-4f4e-906c-303e1eb20baf" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.37:8081/healthz\": dial tcp 10.217.0.37:8081: connect: connection refused" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.060579 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" event={"ID":"9cb38355-91e8-4856-abfa-b307e3f1909b","Type":"ContainerStarted","Data":"dc398f55992c65d9225ad07bf8774d97ea0fc6087fe2f6ac1f0f01f6a05c70c4"} Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.060653 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.062289 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" event={"ID":"318d598f-84d5-418c-b820-d7ade7fcc8de","Type":"ContainerStarted","Data":"e1102b2ae08fc9babfe56d75ca1dc18ef2915804d2f9ce0c21afa70662b5e8a2"} Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.090414 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" podStartSLOduration=1.127428014 podStartE2EDuration="12.090381925s" podCreationTimestamp="2026-01-23 18:18:07 +0000 UTC" firstStartedPulling="2026-01-23 18:18:07.81936517 +0000 UTC m=+682.815189611" lastFinishedPulling="2026-01-23 18:18:18.782319081 +0000 UTC m=+693.778143522" observedRunningTime="2026-01-23 18:18:19.08562164 +0000 UTC m=+694.081446101" watchObservedRunningTime="2026-01-23 18:18:19.090381925 +0000 UTC m=+694.086206376" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.115444 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f" podStartSLOduration=2.088756969 podStartE2EDuration="13.11541448s" podCreationTimestamp="2026-01-23 18:18:06 +0000 UTC" firstStartedPulling="2026-01-23 18:18:07.672953612 +0000 UTC m=+682.668778053" lastFinishedPulling="2026-01-23 18:18:18.699611123 +0000 UTC m=+693.695435564" observedRunningTime="2026-01-23 18:18:19.112453469 +0000 UTC m=+694.108277920" watchObservedRunningTime="2026-01-23 18:18:19.11541448 +0000 UTC m=+694.111238921" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.189972 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h" podStartSLOduration=2.226658602 podStartE2EDuration="13.189939671s" podCreationTimestamp="2026-01-23 18:18:06 +0000 UTC" firstStartedPulling="2026-01-23 18:18:07.781866864 +0000 UTC m=+682.777691305" lastFinishedPulling="2026-01-23 18:18:18.745147923 +0000 UTC m=+693.740972374" observedRunningTime="2026-01-23 18:18:19.188961748 +0000 UTC m=+694.184786219" watchObservedRunningTime="2026-01-23 18:18:19.189939671 +0000 UTC m=+694.185764122" Jan 23 18:18:19 crc kubenswrapper[4688]: I0123 18:18:19.243770 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" podStartSLOduration=1.5535550809999998 podStartE2EDuration="12.243742661s" podCreationTimestamp="2026-01-23 18:18:07 +0000 UTC" firstStartedPulling="2026-01-23 18:18:08.028666058 +0000 UTC m=+683.024490499" lastFinishedPulling="2026-01-23 18:18:18.718853648 +0000 UTC m=+693.714678079" observedRunningTime="2026-01-23 18:18:19.237759867 +0000 UTC m=+694.233584308" watchObservedRunningTime="2026-01-23 18:18:19.243742661 +0000 UTC m=+694.239567102" Jan 23 18:18:20 crc kubenswrapper[4688]: I0123 18:18:20.071666 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" event={"ID":"505c5412-6a67-4596-ae6a-bbd51d146126","Type":"ContainerStarted","Data":"e37e1956f267c8631cd95887017bc9068f42725c6952be23d3c620546cbf6a38"} Jan 23 18:18:20 crc kubenswrapper[4688]: I0123 18:18:20.074170 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-86gvw" Jan 23 18:18:20 crc kubenswrapper[4688]: I0123 18:18:20.116293 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fkspw" podStartSLOduration=3.155803804 podStartE2EDuration="14.116260434s" podCreationTimestamp="2026-01-23 18:18:06 +0000 UTC" firstStartedPulling="2026-01-23 18:18:07.800768041 +0000 UTC m=+682.796592482" lastFinishedPulling="2026-01-23 18:18:18.761224671 +0000 UTC m=+693.757049112" observedRunningTime="2026-01-23 18:18:20.106766695 +0000 UTC m=+695.102591136" watchObservedRunningTime="2026-01-23 18:18:20.116260434 +0000 UTC m=+695.112084875" Jan 23 18:18:27 crc kubenswrapper[4688]: I0123 18:18:27.612315 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pgd8p" Jan 23 18:18:42 crc kubenswrapper[4688]: I0123 18:18:42.170122 4688 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.595083 4688 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2"] Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.597385 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.599946 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.613474 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2"] Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.619455 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.619504 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.619615 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmhb\" (UniqueName: \"kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.721137 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.721242 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.721331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hmhb\" (UniqueName: \"kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.721878 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.721898 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.743143 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmhb\" (UniqueName: \"kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:40 crc kubenswrapper[4688]: I0123 18:19:40.915838 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:41 crc kubenswrapper[4688]: I0123 18:19:41.180140 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2"] Jan 23 18:19:41 crc kubenswrapper[4688]: I0123 18:19:41.645341 4688 generic.go:334] "Generic (PLEG): container finished" podID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerID="85fd31a731c97b42e6d6d8f6bb50d458cae80b25d0873d77d9d7872d3d31474f" exitCode=0 Jan 23 18:19:41 crc kubenswrapper[4688]: I0123 18:19:41.645423 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" event={"ID":"17ee3cf3-5aa8-443c-b750-b01f9aa16af4","Type":"ContainerDied","Data":"85fd31a731c97b42e6d6d8f6bb50d458cae80b25d0873d77d9d7872d3d31474f"} Jan 23 18:19:41 crc kubenswrapper[4688]: I0123 18:19:41.645865 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" event={"ID":"17ee3cf3-5aa8-443c-b750-b01f9aa16af4","Type":"ContainerStarted","Data":"8082f628398d9861509971b60bd7ebf1478c9dc7066bd3c5a40cd96962e4f5a8"} Jan 23 18:19:42 crc kubenswrapper[4688]: I0123 18:19:42.946881 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:42 crc kubenswrapper[4688]: I0123 18:19:42.948345 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:42 crc kubenswrapper[4688]: I0123 18:19:42.972241 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.066146 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7cz\" (UniqueName: \"kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.066415 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.066623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.168742 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.169330 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.169392 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7cz\" (UniqueName: \"kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.169480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.170103 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.198716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-df7cz\" (UniqueName: \"kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz\") pod \"redhat-operators-mgrh7\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.270997 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.508942 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.663570 4688 generic.go:334] "Generic (PLEG): container finished" podID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerID="71d0c5fa47840f9256adc244e0afbba7d781fc265b2fc0fe3887ca62247c072e" exitCode=0 Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.663662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" event={"ID":"17ee3cf3-5aa8-443c-b750-b01f9aa16af4","Type":"ContainerDied","Data":"71d0c5fa47840f9256adc244e0afbba7d781fc265b2fc0fe3887ca62247c072e"} Jan 23 18:19:43 crc kubenswrapper[4688]: I0123 18:19:43.666138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerStarted","Data":"077cb08bd3ec1adeaa96ac95d32bbe2afb4b002a782fc3225230285e257f2f95"} Jan 23 18:19:44 crc kubenswrapper[4688]: I0123 18:19:44.675158 4688 generic.go:334] "Generic (PLEG): container finished" podID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerID="a3e2fcc5f07793b52750c80f2b77db118057919a8c462cc1edb65af11b747ebf" exitCode=0 Jan 23 18:19:44 crc kubenswrapper[4688]: I0123 18:19:44.675293 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerDied","Data":"a3e2fcc5f07793b52750c80f2b77db118057919a8c462cc1edb65af11b747ebf"} Jan 23 18:19:44 crc kubenswrapper[4688]: I0123 18:19:44.678653 4688 generic.go:334] "Generic (PLEG): container finished" podID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerID="aa29763daf245309d41ecf4ff09b4b26dda0a367652b4402ee4908b600ad703b" exitCode=0 Jan 23 18:19:44 crc kubenswrapper[4688]: I0123 18:19:44.678733 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" event={"ID":"17ee3cf3-5aa8-443c-b750-b01f9aa16af4","Type":"ContainerDied","Data":"aa29763daf245309d41ecf4ff09b4b26dda0a367652b4402ee4908b600ad703b"} Jan 23 18:19:45 crc kubenswrapper[4688]: I0123 18:19:45.920546 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.011789 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmhb\" (UniqueName: \"kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb\") pod \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.012030 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle\") pod \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.012067 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util\") pod \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\" (UID: \"17ee3cf3-5aa8-443c-b750-b01f9aa16af4\") " Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.012810 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle" (OuterVolumeSpecName: "bundle") pod "17ee3cf3-5aa8-443c-b750-b01f9aa16af4" (UID: "17ee3cf3-5aa8-443c-b750-b01f9aa16af4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.019904 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb" (OuterVolumeSpecName: "kube-api-access-9hmhb") pod "17ee3cf3-5aa8-443c-b750-b01f9aa16af4" (UID: "17ee3cf3-5aa8-443c-b750-b01f9aa16af4"). InnerVolumeSpecName "kube-api-access-9hmhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.026059 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util" (OuterVolumeSpecName: "util") pod "17ee3cf3-5aa8-443c-b750-b01f9aa16af4" (UID: "17ee3cf3-5aa8-443c-b750-b01f9aa16af4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.113376 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.113413 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.113427 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hmhb\" (UniqueName: \"kubernetes.io/projected/17ee3cf3-5aa8-443c-b750-b01f9aa16af4-kube-api-access-9hmhb\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.695667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" event={"ID":"17ee3cf3-5aa8-443c-b750-b01f9aa16af4","Type":"ContainerDied","Data":"8082f628398d9861509971b60bd7ebf1478c9dc7066bd3c5a40cd96962e4f5a8"} Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.695769 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8082f628398d9861509971b60bd7ebf1478c9dc7066bd3c5a40cd96962e4f5a8" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.695804 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2" Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.698458 4688 generic.go:334] "Generic (PLEG): container finished" podID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerID="7fb346891017a3b770a9fdda2cfa34e6aab492ad20cb30002013cc432a722f82" exitCode=0 Jan 23 18:19:46 crc kubenswrapper[4688]: I0123 18:19:46.698538 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerDied","Data":"7fb346891017a3b770a9fdda2cfa34e6aab492ad20cb30002013cc432a722f82"} Jan 23 18:19:47 crc kubenswrapper[4688]: I0123 18:19:47.707558 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerStarted","Data":"055c5dda7d4d6f8617ad822abaa0d349ece16ecbbdf924f37f4de7c8f21a1d39"} Jan 23 18:19:47 crc kubenswrapper[4688]: I0123 18:19:47.729172 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mgrh7" podStartSLOduration=3.332192492 podStartE2EDuration="5.729037421s" podCreationTimestamp="2026-01-23 18:19:42 +0000 UTC" firstStartedPulling="2026-01-23 18:19:44.679715623 +0000 UTC m=+779.675540064" lastFinishedPulling="2026-01-23 18:19:47.076560532 +0000 UTC m=+782.072384993" observedRunningTime="2026-01-23 18:19:47.728113635 +0000 UTC m=+782.723938076" watchObservedRunningTime="2026-01-23 18:19:47.729037421 +0000 UTC m=+782.724861882" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.856898 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l8trt"] Jan 23 18:19:49 crc kubenswrapper[4688]: E0123 18:19:49.857828 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="util" Jan 23 
18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.857850 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="util" Jan 23 18:19:49 crc kubenswrapper[4688]: E0123 18:19:49.857862 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="extract" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.857871 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="extract" Jan 23 18:19:49 crc kubenswrapper[4688]: E0123 18:19:49.857889 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="pull" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.857897 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="pull" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.858042 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ee3cf3-5aa8-443c-b750-b01f9aa16af4" containerName="extract" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.858819 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.861467 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7zdwh" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.864057 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.864432 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.873637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz54\" (UniqueName: \"kubernetes.io/projected/645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81-kube-api-access-6zz54\") pod \"nmstate-operator-646758c888-l8trt\" (UID: \"645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81\") " pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.876894 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l8trt"] Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.975624 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz54\" (UniqueName: \"kubernetes.io/projected/645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81-kube-api-access-6zz54\") pod \"nmstate-operator-646758c888-l8trt\" (UID: \"645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81\") " pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" Jan 23 18:19:49 crc kubenswrapper[4688]: I0123 18:19:49.998770 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zz54\" (UniqueName: \"kubernetes.io/projected/645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81-kube-api-access-6zz54\") pod \"nmstate-operator-646758c888-l8trt\" (UID: \"645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81\") " pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" Jan 23 18:19:50 crc kubenswrapper[4688]: I0123 18:19:50.175995 4688 util.go:30] "No sandbox for pod can be found. 
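
The reflector.go "Caches populated" lines above (here for the openshift-nmstate dockercfg pull secret and the kube-root-ca.crt/openshift-service-ca.crt configmaps) mark kubelet's namespaced informer caches warming up before the pod that references those objects is synced. Kubelet wires these reflectors internally; the client-go equivalent of priming a Secret cache for one namespace looks roughly like this (standard client-go API; the kubeconfig path is a placeholder):

```go
// secretcache.go — client-go analogue of "Caches populated for *v1.Secret":
// start a namespaced Secret informer and block until its cache has synced.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-nmstate"))
	secrets := factory.Core().V1().Secrets().Informer()
	factory.Start(stop)

	if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("Caches populated for *v1.Secret in openshift-nmstate")
}
```
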
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" Jan 23 18:19:50 crc kubenswrapper[4688]: I0123 18:19:50.420812 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l8trt"] Jan 23 18:19:50 crc kubenswrapper[4688]: I0123 18:19:50.729503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" event={"ID":"645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81","Type":"ContainerStarted","Data":"1c389fdec99047345f550d0bf921af9c92c6e60927db95fc334ca40093371ee1"} Jan 23 18:19:53 crc kubenswrapper[4688]: I0123 18:19:53.271299 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:53 crc kubenswrapper[4688]: I0123 18:19:53.271754 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:53 crc kubenswrapper[4688]: I0123 18:19:53.322533 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:53 crc kubenswrapper[4688]: I0123 18:19:53.796309 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:55 crc kubenswrapper[4688]: I0123 18:19:55.725999 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:55 crc kubenswrapper[4688]: I0123 18:19:55.760493 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mgrh7" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="registry-server" containerID="cri-o://055c5dda7d4d6f8617ad822abaa0d349ece16ecbbdf924f37f4de7c8f21a1d39" gracePeriod=2 Jan 23 18:19:58 crc kubenswrapper[4688]: I0123 18:19:58.787538 4688 generic.go:334] "Generic (PLEG): container finished" podID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerID="055c5dda7d4d6f8617ad822abaa0d349ece16ecbbdf924f37f4de7c8f21a1d39" exitCode=0 Jan 23 18:19:58 crc kubenswrapper[4688]: I0123 18:19:58.787635 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerDied","Data":"055c5dda7d4d6f8617ad822abaa0d349ece16ecbbdf924f37f4de7c8f21a1d39"} Jan 23 18:19:58 crc kubenswrapper[4688]: I0123 18:19:58.925863 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.023292 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities\") pod \"30dafd64-5d42-48b1-9a54-f06356c9b12a\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.023471 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7cz\" (UniqueName: \"kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz\") pod \"30dafd64-5d42-48b1-9a54-f06356c9b12a\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.023541 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content\") pod \"30dafd64-5d42-48b1-9a54-f06356c9b12a\" (UID: \"30dafd64-5d42-48b1-9a54-f06356c9b12a\") " Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.024485 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities" (OuterVolumeSpecName: "utilities") pod "30dafd64-5d42-48b1-9a54-f06356c9b12a" (UID: "30dafd64-5d42-48b1-9a54-f06356c9b12a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.031695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz" (OuterVolumeSpecName: "kube-api-access-df7cz") pod "30dafd64-5d42-48b1-9a54-f06356c9b12a" (UID: "30dafd64-5d42-48b1-9a54-f06356c9b12a"). InnerVolumeSpecName "kube-api-access-df7cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.125793 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df7cz\" (UniqueName: \"kubernetes.io/projected/30dafd64-5d42-48b1-9a54-f06356c9b12a-kube-api-access-df7cz\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.125847 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.156489 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30dafd64-5d42-48b1-9a54-f06356c9b12a" (UID: "30dafd64-5d42-48b1-9a54-f06356c9b12a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.228045 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30dafd64-5d42-48b1-9a54-f06356c9b12a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.800132 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mgrh7" event={"ID":"30dafd64-5d42-48b1-9a54-f06356c9b12a","Type":"ContainerDied","Data":"077cb08bd3ec1adeaa96ac95d32bbe2afb4b002a782fc3225230285e257f2f95"} Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.800246 4688 scope.go:117] "RemoveContainer" containerID="055c5dda7d4d6f8617ad822abaa0d349ece16ecbbdf924f37f4de7c8f21a1d39" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.800419 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mgrh7" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.834497 4688 scope.go:117] "RemoveContainer" containerID="7fb346891017a3b770a9fdda2cfa34e6aab492ad20cb30002013cc432a722f82" Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.842213 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.852540 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mgrh7"] Jan 23 18:19:59 crc kubenswrapper[4688]: I0123 18:19:59.859632 4688 scope.go:117] "RemoveContainer" containerID="a3e2fcc5f07793b52750c80f2b77db118057919a8c462cc1edb65af11b747ebf" Jan 23 18:20:01 crc kubenswrapper[4688]: I0123 18:20:01.363878 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" path="/var/lib/kubelet/pods/30dafd64-5d42-48b1-9a54-f06356c9b12a/volumes" Jan 23 18:20:06 crc kubenswrapper[4688]: I0123 18:20:06.965696 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:06 crc kubenswrapper[4688]: I0123 18:20:06.966055 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:14 crc kubenswrapper[4688]: I0123 18:20:14.915962 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" event={"ID":"645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81","Type":"ContainerStarted","Data":"00b52ce8724ce07fe05fcf18ebae24663f213658827baebf9237cc0f9b803f2e"} Jan 23 18:20:14 crc kubenswrapper[4688]: I0123 18:20:14.949780 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-l8trt" podStartSLOduration=2.488109999 podStartE2EDuration="25.949755386s" podCreationTimestamp="2026-01-23 18:19:49 +0000 UTC" firstStartedPulling="2026-01-23 18:19:50.438355733 +0000 UTC m=+785.434180184" lastFinishedPulling="2026-01-23 18:20:13.90000112 +0000 UTC m=+808.895825571" observedRunningTime="2026-01-23 
18:20:14.947579145 +0000 UTC m=+809.943403586" watchObservedRunningTime="2026-01-23 18:20:14.949755386 +0000 UTC m=+809.945579827" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.128904 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8dl6z"] Jan 23 18:20:16 crc kubenswrapper[4688]: E0123 18:20:16.129355 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="extract-content" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.129373 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="extract-content" Jan 23 18:20:16 crc kubenswrapper[4688]: E0123 18:20:16.129386 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="extract-utilities" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.129394 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="extract-utilities" Jan 23 18:20:16 crc kubenswrapper[4688]: E0123 18:20:16.129411 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="registry-server" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.129435 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="registry-server" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.129601 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="30dafd64-5d42-48b1-9a54-f06356c9b12a" containerName="registry-server" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.133633 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.142645 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.146431 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-chjhr" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.149778 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.155929 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.164748 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8dl6z"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.189482 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4hkd8"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.191025 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.197314 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.243177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xml\" (UniqueName: \"kubernetes.io/projected/a07585a4-2f3a-4062-9083-c64fcc9463a3-kube-api-access-l2xml\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244497 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvm4n\" (UniqueName: \"kubernetes.io/projected/c43497a2-9efb-47c2-b161-88cfe2b1aabb-kube-api-access-rvm4n\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244534 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfwd\" (UniqueName: \"kubernetes.io/projected/c65c520e-8672-463c-9337-3be6c949d06f-kube-api-access-jhfwd\") pod \"nmstate-metrics-54757c584b-8dl6z\" (UID: \"c65c520e-8672-463c-9337-3be6c949d06f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244579 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-nmstate-lock\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244689 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-ovs-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.244762 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-dbus-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.312362 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.314519 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.317176 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ck66k" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.317569 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.318759 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.323124 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.346805 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvm4n\" (UniqueName: \"kubernetes.io/projected/c43497a2-9efb-47c2-b161-88cfe2b1aabb-kube-api-access-rvm4n\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.346881 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfwd\" (UniqueName: \"kubernetes.io/projected/c65c520e-8672-463c-9337-3be6c949d06f-kube-api-access-jhfwd\") pod \"nmstate-metrics-54757c584b-8dl6z\" (UID: \"c65c520e-8672-463c-9337-3be6c949d06f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.346926 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-nmstate-lock\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.346996 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-ovs-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.347039 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-dbus-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.347075 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2xml\" (UniqueName: \"kubernetes.io/projected/a07585a4-2f3a-4062-9083-c64fcc9463a3-kube-api-access-l2xml\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.347103 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 
23 18:20:16 crc kubenswrapper[4688]: E0123 18:20:16.347318 4688 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 18:20:16 crc kubenswrapper[4688]: E0123 18:20:16.347405 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair podName:c43497a2-9efb-47c2-b161-88cfe2b1aabb nodeName:}" failed. No retries permitted until 2026-01-23 18:20:16.847372391 +0000 UTC m=+811.843196832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-wzgkn" (UID: "c43497a2-9efb-47c2-b161-88cfe2b1aabb") : secret "openshift-nmstate-webhook" not found Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.348147 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-nmstate-lock\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.348214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-ovs-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.348554 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a07585a4-2f3a-4062-9083-c64fcc9463a3-dbus-socket\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.372706 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvm4n\" (UniqueName: \"kubernetes.io/projected/c43497a2-9efb-47c2-b161-88cfe2b1aabb-kube-api-access-rvm4n\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.372796 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2xml\" (UniqueName: \"kubernetes.io/projected/a07585a4-2f3a-4062-9083-c64fcc9463a3-kube-api-access-l2xml\") pod \"nmstate-handler-4hkd8\" (UID: \"a07585a4-2f3a-4062-9083-c64fcc9463a3\") " pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.379438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfwd\" (UniqueName: \"kubernetes.io/projected/c65c520e-8672-463c-9337-3be6c949d06f-kube-api-access-jhfwd\") pod \"nmstate-metrics-54757c584b-8dl6z\" (UID: \"c65c520e-8672-463c-9337-3be6c949d06f\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.448878 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0ba8c497-753e-46c1-b423-cd7cd1b3616e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.448980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgtjl\" (UniqueName: \"kubernetes.io/projected/0ba8c497-753e-46c1-b423-cd7cd1b3616e-kube-api-access-cgtjl\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.449045 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba8c497-753e-46c1-b423-cd7cd1b3616e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.455790 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.518786 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.523172 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cb499fd5c-m4wbz"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.524458 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.536739 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb499fd5c-m4wbz"] Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.550320 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgtjl\" (UniqueName: \"kubernetes.io/projected/0ba8c497-753e-46c1-b423-cd7cd1b3616e-kube-api-access-cgtjl\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.550429 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba8c497-753e-46c1-b423-cd7cd1b3616e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.550523 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0ba8c497-753e-46c1-b423-cd7cd1b3616e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.551791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0ba8c497-753e-46c1-b423-cd7cd1b3616e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 
crc kubenswrapper[4688]: I0123 18:20:16.557422 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ba8c497-753e-46c1-b423-cd7cd1b3616e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: W0123 18:20:16.567405 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07585a4_2f3a_4062_9083_c64fcc9463a3.slice/crio-ca9328e3a5892303c9962599b23af439e40fc66614068a0b361c383b4f02f7f7 WatchSource:0}: Error finding container ca9328e3a5892303c9962599b23af439e40fc66614068a0b361c383b4f02f7f7: Status 404 returned error can't find the container with id ca9328e3a5892303c9962599b23af439e40fc66614068a0b361c383b4f02f7f7 Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.580016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgtjl\" (UniqueName: \"kubernetes.io/projected/0ba8c497-753e-46c1-b423-cd7cd1b3616e-kube-api-access-cgtjl\") pod \"nmstate-console-plugin-7754f76f8b-wcxg2\" (UID: \"0ba8c497-753e-46c1-b423-cd7cd1b3616e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.635632 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.656734 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-service-ca\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.656926 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-oauth-config\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.656993 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.657041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-trusted-ca-bundle\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.657070 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-oauth-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: 
\"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.657109 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trl2q\" (UniqueName: \"kubernetes.io/projected/adb60503-15d7-4af2-9c83-bd4c9e9d978b-kube-api-access-trl2q\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.657242 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-config\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.758938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-service-ca\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759063 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-oauth-config\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-trusted-ca-bundle\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759163 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-oauth-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trl2q\" (UniqueName: \"kubernetes.io/projected/adb60503-15d7-4af2-9c83-bd4c9e9d978b-kube-api-access-trl2q\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.759255 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-config\") 
pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.760576 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-config\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.760989 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-oauth-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.761639 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-service-ca\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.761992 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adb60503-15d7-4af2-9c83-bd4c9e9d978b-trusted-ca-bundle\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.769219 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-oauth-config\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.777806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adb60503-15d7-4af2-9c83-bd4c9e9d978b-console-serving-cert\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.781120 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trl2q\" (UniqueName: \"kubernetes.io/projected/adb60503-15d7-4af2-9c83-bd4c9e9d978b-kube-api-access-trl2q\") pod \"console-5cb499fd5c-m4wbz\" (UID: \"adb60503-15d7-4af2-9c83-bd4c9e9d978b\") " pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.860807 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: \"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.869531 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c43497a2-9efb-47c2-b161-88cfe2b1aabb-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wzgkn\" (UID: 
\"c43497a2-9efb-47c2-b161-88cfe2b1aabb\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.881879 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.904687 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2"] Jan 23 18:20:16 crc kubenswrapper[4688]: W0123 18:20:16.911223 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ba8c497_753e_46c1_b423_cd7cd1b3616e.slice/crio-0b2c3e3998113b51ab4821d14f235012566793902cf36c5a89e6976aca0a2097 WatchSource:0}: Error finding container 0b2c3e3998113b51ab4821d14f235012566793902cf36c5a89e6976aca0a2097: Status 404 returned error can't find the container with id 0b2c3e3998113b51ab4821d14f235012566793902cf36c5a89e6976aca0a2097 Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.929835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" event={"ID":"0ba8c497-753e-46c1-b423-cd7cd1b3616e","Type":"ContainerStarted","Data":"0b2c3e3998113b51ab4821d14f235012566793902cf36c5a89e6976aca0a2097"} Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.931840 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4hkd8" event={"ID":"a07585a4-2f3a-4062-9083-c64fcc9463a3","Type":"ContainerStarted","Data":"ca9328e3a5892303c9962599b23af439e40fc66614068a0b361c383b4f02f7f7"} Jan 23 18:20:16 crc kubenswrapper[4688]: I0123 18:20:16.984002 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8dl6z"] Jan 23 18:20:16 crc kubenswrapper[4688]: W0123 18:20:16.991010 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc65c520e_8672_463c_9337_3be6c949d06f.slice/crio-b378ec2f75bfaacb408945cd4122c015a42e49e6fedecf6f928f5d0dbb9d29f2 WatchSource:0}: Error finding container b378ec2f75bfaacb408945cd4122c015a42e49e6fedecf6f928f5d0dbb9d29f2: Status 404 returned error can't find the container with id b378ec2f75bfaacb408945cd4122c015a42e49e6fedecf6f928f5d0dbb9d29f2 Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.081709 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.102832 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cb499fd5c-m4wbz"] Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.339678 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn"] Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.940107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" event={"ID":"c43497a2-9efb-47c2-b161-88cfe2b1aabb","Type":"ContainerStarted","Data":"1ef1b8542f38f755ed1fb8b519768e8ad118b77c67a2862f6f95e2dcee994b8e"} Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.942193 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb499fd5c-m4wbz" event={"ID":"adb60503-15d7-4af2-9c83-bd4c9e9d978b","Type":"ContainerStarted","Data":"982f6fc3a544e5a33946d180b24330c6a6dab68b5d56c4ca4b4b499e4d0e8019"} Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.942226 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cb499fd5c-m4wbz" event={"ID":"adb60503-15d7-4af2-9c83-bd4c9e9d978b","Type":"ContainerStarted","Data":"69cc17bb95524706e5886d908766e518cb3ce058a0beafcaa0850b2f0cf223c0"} Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.944322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" event={"ID":"c65c520e-8672-463c-9337-3be6c949d06f","Type":"ContainerStarted","Data":"b378ec2f75bfaacb408945cd4122c015a42e49e6fedecf6f928f5d0dbb9d29f2"} Jan 23 18:20:17 crc kubenswrapper[4688]: I0123 18:20:17.968809 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cb499fd5c-m4wbz" podStartSLOduration=1.968772682 podStartE2EDuration="1.968772682s" podCreationTimestamp="2026-01-23 18:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:20:17.962909367 +0000 UTC m=+812.958733809" watchObservedRunningTime="2026-01-23 18:20:17.968772682 +0000 UTC m=+812.964597123" Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.986617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4hkd8" event={"ID":"a07585a4-2f3a-4062-9083-c64fcc9463a3","Type":"ContainerStarted","Data":"8cbfe89f644ced3641496987681ec6a1ddd9cdd8d08d764194f0734d9427038f"} Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.987378 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.988992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" event={"ID":"c65c520e-8672-463c-9337-3be6c949d06f","Type":"ContainerStarted","Data":"cb2cba3991f543f0c7c4df64c38c9b471725e3dd170579c9f6940d53dc34d5d9"} Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.991421 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" event={"ID":"0ba8c497-753e-46c1-b423-cd7cd1b3616e","Type":"ContainerStarted","Data":"34c710365aabfacbfdbd217d0d729056daf66ee1165d46145085a7a4a246d302"} Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.993031 4688 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" event={"ID":"c43497a2-9efb-47c2-b161-88cfe2b1aabb","Type":"ContainerStarted","Data":"f48c6a7980bfc328bc3e0f0c089304e173d519d6fc01dcd1dd45e865521c6c85"} Jan 23 18:20:20 crc kubenswrapper[4688]: I0123 18:20:20.993149 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:21 crc kubenswrapper[4688]: I0123 18:20:21.008075 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4hkd8" podStartSLOduration=1.853606433 podStartE2EDuration="5.00805275s" podCreationTimestamp="2026-01-23 18:20:16 +0000 UTC" firstStartedPulling="2026-01-23 18:20:16.580287965 +0000 UTC m=+811.576112406" lastFinishedPulling="2026-01-23 18:20:19.734734282 +0000 UTC m=+814.730558723" observedRunningTime="2026-01-23 18:20:21.007046062 +0000 UTC m=+816.002870513" watchObservedRunningTime="2026-01-23 18:20:21.00805275 +0000 UTC m=+816.003877191" Jan 23 18:20:21 crc kubenswrapper[4688]: I0123 18:20:21.034907 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" podStartSLOduration=2.6499917120000003 podStartE2EDuration="5.034873365s" podCreationTimestamp="2026-01-23 18:20:16 +0000 UTC" firstStartedPulling="2026-01-23 18:20:17.352140173 +0000 UTC m=+812.347964614" lastFinishedPulling="2026-01-23 18:20:19.737021826 +0000 UTC m=+814.732846267" observedRunningTime="2026-01-23 18:20:21.023413862 +0000 UTC m=+816.019238323" watchObservedRunningTime="2026-01-23 18:20:21.034873365 +0000 UTC m=+816.030697806" Jan 23 18:20:24 crc kubenswrapper[4688]: I0123 18:20:24.016252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" event={"ID":"c65c520e-8672-463c-9337-3be6c949d06f","Type":"ContainerStarted","Data":"01eb5f329e6c08390830dafa65c06fee45278e99d5f1b405cfa6b627b2aeaffd"} Jan 23 18:20:24 crc kubenswrapper[4688]: I0123 18:20:24.039551 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-8dl6z" podStartSLOduration=2.1606847829999998 podStartE2EDuration="8.039519387s" podCreationTimestamp="2026-01-23 18:20:16 +0000 UTC" firstStartedPulling="2026-01-23 18:20:16.993770859 +0000 UTC m=+811.989595300" lastFinishedPulling="2026-01-23 18:20:22.872605463 +0000 UTC m=+817.868429904" observedRunningTime="2026-01-23 18:20:24.034572047 +0000 UTC m=+819.030396508" watchObservedRunningTime="2026-01-23 18:20:24.039519387 +0000 UTC m=+819.035343848" Jan 23 18:20:24 crc kubenswrapper[4688]: I0123 18:20:24.044543 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wcxg2" podStartSLOduration=5.224814239 podStartE2EDuration="8.044511677s" podCreationTimestamp="2026-01-23 18:20:16 +0000 UTC" firstStartedPulling="2026-01-23 18:20:16.915033534 +0000 UTC m=+811.910857975" lastFinishedPulling="2026-01-23 18:20:19.734730972 +0000 UTC m=+814.730555413" observedRunningTime="2026-01-23 18:20:21.064328574 +0000 UTC m=+816.060153025" watchObservedRunningTime="2026-01-23 18:20:24.044511677 +0000 UTC m=+819.040336138" Jan 23 18:20:26 crc kubenswrapper[4688]: I0123 18:20:26.551357 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4hkd8" Jan 23 18:20:26 crc kubenswrapper[4688]: I0123 18:20:26.882784 4688 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:26 crc kubenswrapper[4688]: I0123 18:20:26.882904 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:26 crc kubenswrapper[4688]: I0123 18:20:26.888669 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:27 crc kubenswrapper[4688]: I0123 18:20:27.041018 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cb499fd5c-m4wbz" Jan 23 18:20:27 crc kubenswrapper[4688]: I0123 18:20:27.108261 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f29lx"] Jan 23 18:20:36 crc kubenswrapper[4688]: I0123 18:20:36.965731 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:20:36 crc kubenswrapper[4688]: I0123 18:20:36.966835 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:20:37 crc kubenswrapper[4688]: I0123 18:20:37.090700 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wzgkn" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.019232 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.025504 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.034737 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.153852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.154380 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcs7\" (UniqueName: \"kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.154454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.256149 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.256648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.256737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcs7\" (UniqueName: \"kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.256964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.257265 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.282077 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rxcs7\" (UniqueName: \"kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7\") pod \"community-operators-pf5tt\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.368495 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:43 crc kubenswrapper[4688]: I0123 18:20:43.735445 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:44 crc kubenswrapper[4688]: I0123 18:20:44.168203 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerID="40a52280dd1d37ee78a3cf9af2999a68a940e9bbf6c744ac4ead63aa1ed5afe0" exitCode=0 Jan 23 18:20:44 crc kubenswrapper[4688]: I0123 18:20:44.168273 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerDied","Data":"40a52280dd1d37ee78a3cf9af2999a68a940e9bbf6c744ac4ead63aa1ed5afe0"} Jan 23 18:20:44 crc kubenswrapper[4688]: I0123 18:20:44.168705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerStarted","Data":"62ffcb752ed601b9e6c9507b778b18fb10c8a7f41e72a5da1418fa0e53bc8525"} Jan 23 18:20:45 crc kubenswrapper[4688]: I0123 18:20:45.180601 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerStarted","Data":"6b7266bcdd0796ee715100d06d44129bcc8bec2429557343b76b8b7ef9753f4a"} Jan 23 18:20:46 crc kubenswrapper[4688]: I0123 18:20:46.193509 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerID="6b7266bcdd0796ee715100d06d44129bcc8bec2429557343b76b8b7ef9753f4a" exitCode=0 Jan 23 18:20:46 crc kubenswrapper[4688]: I0123 18:20:46.194409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerDied","Data":"6b7266bcdd0796ee715100d06d44129bcc8bec2429557343b76b8b7ef9753f4a"} Jan 23 18:20:47 crc kubenswrapper[4688]: I0123 18:20:47.208325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerStarted","Data":"25d1dce3a816543bb95a0ab8d03a04175fb6f67e307ba610ee0abff0c61742d4"} Jan 23 18:20:47 crc kubenswrapper[4688]: I0123 18:20:47.249855 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pf5tt" podStartSLOduration=2.498957198 podStartE2EDuration="5.249816369s" podCreationTimestamp="2026-01-23 18:20:42 +0000 UTC" firstStartedPulling="2026-01-23 18:20:44.171955297 +0000 UTC m=+839.167779728" lastFinishedPulling="2026-01-23 18:20:46.922814448 +0000 UTC m=+841.918638899" observedRunningTime="2026-01-23 18:20:47.239386006 +0000 UTC m=+842.235210447" watchObservedRunningTime="2026-01-23 18:20:47.249816369 +0000 UTC m=+842.245640820" Jan 23 18:20:52 crc kubenswrapper[4688]: I0123 18:20:52.155430 4688 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-console/console-f9d7485db-f29lx" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" containerID="cri-o://a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1" gracePeriod=15 Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.077270 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f29lx_d4a321be-034e-49be-bcb8-114be9ecc457/console/0.log" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.077780 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176444 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176538 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vcrd\" (UniqueName: \"kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176617 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176641 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176672 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176721 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.176741 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config\") pod \"d4a321be-034e-49be-bcb8-114be9ecc457\" (UID: \"d4a321be-034e-49be-bcb8-114be9ecc457\") " Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.178404 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca" (OuterVolumeSpecName: "service-ca") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.178807 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.179043 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.179148 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config" (OuterVolumeSpecName: "console-config") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.185943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd" (OuterVolumeSpecName: "kube-api-access-9vcrd") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "kube-api-access-9vcrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.186146 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.186470 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d4a321be-034e-49be-bcb8-114be9ecc457" (UID: "d4a321be-034e-49be-bcb8-114be9ecc457"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279132 4688 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279347 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vcrd\" (UniqueName: \"kubernetes.io/projected/d4a321be-034e-49be-bcb8-114be9ecc457-kube-api-access-9vcrd\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279363 4688 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279375 4688 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4a321be-034e-49be-bcb8-114be9ecc457-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279384 4688 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279394 4688 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.279403 4688 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4a321be-034e-49be-bcb8-114be9ecc457-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.284721 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-f29lx_d4a321be-034e-49be-bcb8-114be9ecc457/console/0.log" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.284767 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4a321be-034e-49be-bcb8-114be9ecc457" containerID="a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1" exitCode=2 Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.284804 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f29lx" event={"ID":"d4a321be-034e-49be-bcb8-114be9ecc457","Type":"ContainerDied","Data":"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1"} Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.284840 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-f29lx" event={"ID":"d4a321be-034e-49be-bcb8-114be9ecc457","Type":"ContainerDied","Data":"6a7825cf6625e5152e08b47fd0cdeba5f910c7c2692fec4cdcc4918324d52c40"} Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.284862 4688 scope.go:117] "RemoveContainer" containerID="a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.285201 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-f29lx" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.326753 4688 scope.go:117] "RemoveContainer" containerID="a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1" Jan 23 18:20:53 crc kubenswrapper[4688]: E0123 18:20:53.329757 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1\": container with ID starting with a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1 not found: ID does not exist" containerID="a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.329813 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1"} err="failed to get container status \"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1\": rpc error: code = NotFound desc = could not find container \"a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1\": container with ID starting with a8b2a3841104050cb5ba7efd536aa12146e7b1a7d098ce336554b6bda86f27b1 not found: ID does not exist" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.334413 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-f29lx"] Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.368266 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-f29lx"] Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.369226 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.369273 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:53 crc kubenswrapper[4688]: I0123 18:20:53.478365 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:54 crc kubenswrapper[4688]: I0123 18:20:54.338993 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:55 crc kubenswrapper[4688]: I0123 18:20:55.366676 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" path="/var/lib/kubelet/pods/d4a321be-034e-49be-bcb8-114be9ecc457/volumes" Jan 23 18:20:56 crc kubenswrapper[4688]: I0123 18:20:56.728325 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:56 crc kubenswrapper[4688]: I0123 18:20:56.729974 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pf5tt" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="registry-server" containerID="cri-o://25d1dce3a816543bb95a0ab8d03a04175fb6f67e307ba610ee0abff0c61742d4" gracePeriod=2 Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.314553 4688 generic.go:334] "Generic (PLEG): container finished" podID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerID="25d1dce3a816543bb95a0ab8d03a04175fb6f67e307ba610ee0abff0c61742d4" exitCode=0 Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.314616 4688 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerDied","Data":"25d1dce3a816543bb95a0ab8d03a04175fb6f67e307ba610ee0abff0c61742d4"} Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.604707 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf"] Jan 23 18:20:57 crc kubenswrapper[4688]: E0123 18:20:57.605073 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.605089 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.606209 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a321be-034e-49be-bcb8-114be9ecc457" containerName="console" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.607249 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.610270 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.620102 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf"] Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.644277 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.649956 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content\") pod \"ae06cca5-8c9f-4583-8745-54232fc88b9f\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.650096 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcs7\" (UniqueName: \"kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7\") pod \"ae06cca5-8c9f-4583-8745-54232fc88b9f\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.650128 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities\") pod \"ae06cca5-8c9f-4583-8745-54232fc88b9f\" (UID: \"ae06cca5-8c9f-4583-8745-54232fc88b9f\") " Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.650964 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.651051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zb8nn\" (UniqueName: \"kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.651198 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.651713 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities" (OuterVolumeSpecName: "utilities") pod "ae06cca5-8c9f-4583-8745-54232fc88b9f" (UID: "ae06cca5-8c9f-4583-8745-54232fc88b9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.659836 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7" (OuterVolumeSpecName: "kube-api-access-rxcs7") pod "ae06cca5-8c9f-4583-8745-54232fc88b9f" (UID: "ae06cca5-8c9f-4583-8745-54232fc88b9f"). InnerVolumeSpecName "kube-api-access-rxcs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.715477 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae06cca5-8c9f-4583-8745-54232fc88b9f" (UID: "ae06cca5-8c9f-4583-8745-54232fc88b9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.751898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8nn\" (UniqueName: \"kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.751984 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.752066 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.752124 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.752150 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcs7\" (UniqueName: \"kubernetes.io/projected/ae06cca5-8c9f-4583-8745-54232fc88b9f-kube-api-access-rxcs7\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.752165 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae06cca5-8c9f-4583-8745-54232fc88b9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.752792 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.755280 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.777356 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8nn\" (UniqueName: \"kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:57 crc kubenswrapper[4688]: I0123 18:20:57.956256 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.324617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf5tt" event={"ID":"ae06cca5-8c9f-4583-8745-54232fc88b9f","Type":"ContainerDied","Data":"62ffcb752ed601b9e6c9507b778b18fb10c8a7f41e72a5da1418fa0e53bc8525"} Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.324698 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf5tt" Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.325157 4688 scope.go:117] "RemoveContainer" containerID="25d1dce3a816543bb95a0ab8d03a04175fb6f67e307ba610ee0abff0c61742d4" Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.351835 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf"] Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.352180 4688 scope.go:117] "RemoveContainer" containerID="6b7266bcdd0796ee715100d06d44129bcc8bec2429557343b76b8b7ef9753f4a" Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.387908 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.393550 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pf5tt"] Jan 23 18:20:58 crc kubenswrapper[4688]: I0123 18:20:58.394864 4688 scope.go:117] "RemoveContainer" containerID="40a52280dd1d37ee78a3cf9af2999a68a940e9bbf6c744ac4ead63aa1ed5afe0" Jan 23 18:20:59 crc kubenswrapper[4688]: I0123 18:20:59.335837 4688 generic.go:334] "Generic (PLEG): container finished" podID="565c6f37-d514-4443-965d-f482233b748b" containerID="b5aac4e7211da71cc9df0ad9dfce5048fc6496a2d5bfbab647747ab2b6ba44f6" exitCode=0 Jan 23 18:20:59 crc kubenswrapper[4688]: I0123 18:20:59.336173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" event={"ID":"565c6f37-d514-4443-965d-f482233b748b","Type":"ContainerDied","Data":"b5aac4e7211da71cc9df0ad9dfce5048fc6496a2d5bfbab647747ab2b6ba44f6"} Jan 23 18:20:59 crc kubenswrapper[4688]: I0123 18:20:59.336328 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" event={"ID":"565c6f37-d514-4443-965d-f482233b748b","Type":"ContainerStarted","Data":"46c6c114939de682d7cbab103eaae722d20ebe232c18f997eea1a4a17d3cfab3"} Jan 23 18:20:59 crc kubenswrapper[4688]: I0123 18:20:59.376086 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" path="/var/lib/kubelet/pods/ae06cca5-8c9f-4583-8745-54232fc88b9f/volumes" Jan 23 18:21:01 crc kubenswrapper[4688]: I0123 18:21:01.351811 4688 generic.go:334] "Generic (PLEG): container finished" podID="565c6f37-d514-4443-965d-f482233b748b" containerID="20bd0fb59d83f565b6c7bb76e597a1e278e3870e93086839bb7cf9b58e7d534d" exitCode=0 Jan 23 18:21:01 crc kubenswrapper[4688]: I0123 18:21:01.351901 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" event={"ID":"565c6f37-d514-4443-965d-f482233b748b","Type":"ContainerDied","Data":"20bd0fb59d83f565b6c7bb76e597a1e278e3870e93086839bb7cf9b58e7d534d"} Jan 23 18:21:02 crc kubenswrapper[4688]: I0123 18:21:02.373825 4688 generic.go:334] "Generic (PLEG): container finished" podID="565c6f37-d514-4443-965d-f482233b748b" containerID="b7e575102b7c61c0ec481bd55fcd789bd8458239b0814f3e23499f2ae24dc0d1" exitCode=0 Jan 23 18:21:02 crc kubenswrapper[4688]: I0123 18:21:02.373894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" event={"ID":"565c6f37-d514-4443-965d-f482233b748b","Type":"ContainerDied","Data":"b7e575102b7c61c0ec481bd55fcd789bd8458239b0814f3e23499f2ae24dc0d1"} Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.634749 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.746788 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb8nn\" (UniqueName: \"kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn\") pod \"565c6f37-d514-4443-965d-f482233b748b\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.746916 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util\") pod \"565c6f37-d514-4443-965d-f482233b748b\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.746946 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle\") pod \"565c6f37-d514-4443-965d-f482233b748b\" (UID: \"565c6f37-d514-4443-965d-f482233b748b\") " Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.748360 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle" (OuterVolumeSpecName: "bundle") pod "565c6f37-d514-4443-965d-f482233b748b" (UID: "565c6f37-d514-4443-965d-f482233b748b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.758480 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn" (OuterVolumeSpecName: "kube-api-access-zb8nn") pod "565c6f37-d514-4443-965d-f482233b748b" (UID: "565c6f37-d514-4443-965d-f482233b748b"). InnerVolumeSpecName "kube-api-access-zb8nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.763873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util" (OuterVolumeSpecName: "util") pod "565c6f37-d514-4443-965d-f482233b748b" (UID: "565c6f37-d514-4443-965d-f482233b748b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.849259 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb8nn\" (UniqueName: \"kubernetes.io/projected/565c6f37-d514-4443-965d-f482233b748b-kube-api-access-zb8nn\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.849307 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:03 crc kubenswrapper[4688]: I0123 18:21:03.849321 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/565c6f37-d514-4443-965d-f482233b748b-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:04 crc kubenswrapper[4688]: I0123 18:21:04.388705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" event={"ID":"565c6f37-d514-4443-965d-f482233b748b","Type":"ContainerDied","Data":"46c6c114939de682d7cbab103eaae722d20ebe232c18f997eea1a4a17d3cfab3"} Jan 23 18:21:04 crc kubenswrapper[4688]: I0123 18:21:04.388772 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf" Jan 23 18:21:04 crc kubenswrapper[4688]: I0123 18:21:04.388774 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46c6c114939de682d7cbab103eaae722d20ebe232c18f997eea1a4a17d3cfab3" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.145025 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146067 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="util" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146091 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="util" Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146102 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="pull" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146110 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="pull" Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146147 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="extract-content" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146156 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="extract-content" Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146175 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="extract" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146208 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="extract" Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146222 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="registry-server" 
Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146229 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="registry-server" Jan 23 18:21:06 crc kubenswrapper[4688]: E0123 18:21:06.146241 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="extract-utilities" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.146250 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="extract-utilities" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.148323 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae06cca5-8c9f-4583-8745-54232fc88b9f" containerName="registry-server" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.148428 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="565c6f37-d514-4443-965d-f482233b748b" containerName="extract" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.150156 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.152731 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.285160 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.285301 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp5jp\" (UniqueName: \"kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.285377 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.386666 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.386751 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp5jp\" (UniqueName: \"kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.386824 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.387507 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.387513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.417695 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp5jp\" (UniqueName: \"kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp\") pod \"redhat-marketplace-zgkwg\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.501985 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.965167 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.965772 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.965847 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.966886 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.966998 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae" gracePeriod=600 Jan 23 18:21:06 crc kubenswrapper[4688]: I0123 18:21:06.990077 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:07 
crc kubenswrapper[4688]: I0123 18:21:07.412075 4688 generic.go:334] "Generic (PLEG): container finished" podID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerID="ebf7f943d145d22252f83b0c09c0cfc4096921d46a07d4dc615a42fc08f5e69f" exitCode=0 Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.412327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerDied","Data":"ebf7f943d145d22252f83b0c09c0cfc4096921d46a07d4dc615a42fc08f5e69f"} Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.412705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerStarted","Data":"076d2614e1b9d12f15bedebeb0249b0579df7bc98aed8fdc10c6305217854e42"} Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.418451 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae" exitCode=0 Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.418509 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae"} Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.418552 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2"} Jan 23 18:21:07 crc kubenswrapper[4688]: I0123 18:21:07.418576 4688 scope.go:117] "RemoveContainer" containerID="0cadaf13fa81ded2e3a1c3d78a3ae5a1fa4294316faa30d6a26a5553349ddf99" Jan 23 18:21:08 crc kubenswrapper[4688]: I0123 18:21:08.428581 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerStarted","Data":"89670515261dbb678c5938d274f0597a9efaf13c54cbdb6485c040ff9d1553b3"} Jan 23 18:21:09 crc kubenswrapper[4688]: I0123 18:21:09.442694 4688 generic.go:334] "Generic (PLEG): container finished" podID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerID="89670515261dbb678c5938d274f0597a9efaf13c54cbdb6485c040ff9d1553b3" exitCode=0 Jan 23 18:21:09 crc kubenswrapper[4688]: I0123 18:21:09.442771 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerDied","Data":"89670515261dbb678c5938d274f0597a9efaf13c54cbdb6485c040ff9d1553b3"} Jan 23 18:21:10 crc kubenswrapper[4688]: I0123 18:21:10.452137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerStarted","Data":"e2ca5aa6e540ed1ba55d2bb836858db53288836a0fe9c538ed2da9ab39faa35b"} Jan 23 18:21:10 crc kubenswrapper[4688]: I0123 18:21:10.476947 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgkwg" podStartSLOduration=1.868925518 podStartE2EDuration="4.476920391s" podCreationTimestamp="2026-01-23 18:21:06 +0000 UTC" firstStartedPulling="2026-01-23 
18:21:07.415399438 +0000 UTC m=+862.411223889" lastFinishedPulling="2026-01-23 18:21:10.023394321 +0000 UTC m=+865.019218762" observedRunningTime="2026-01-23 18:21:10.47633038 +0000 UTC m=+865.472154831" watchObservedRunningTime="2026-01-23 18:21:10.476920391 +0000 UTC m=+865.472744822" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.588283 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-844488998d-d4vzw"] Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.590074 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.593080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.593718 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.594379 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.594436 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8ltfb" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.594552 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.609221 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-apiservice-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.609291 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-webhook-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.609349 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h6v6\" (UniqueName: \"kubernetes.io/projected/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-kube-api-access-2h6v6\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.618068 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-844488998d-d4vzw"] Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.710527 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-apiservice-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " 
pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.710610 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-webhook-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.710686 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h6v6\" (UniqueName: \"kubernetes.io/projected/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-kube-api-access-2h6v6\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.722289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-webhook-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.730222 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-apiservice-cert\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.735156 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h6v6\" (UniqueName: \"kubernetes.io/projected/1e8a4a5c-bbf0-404d-aada-461ca3e42d72-kube-api-access-2h6v6\") pod \"metallb-operator-controller-manager-844488998d-d4vzw\" (UID: \"1e8a4a5c-bbf0-404d-aada-461ca3e42d72\") " pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.912310 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.979483 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6979454977-pw2fp"] Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.980608 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.983446 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wspsr" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.984223 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 18:21:11 crc kubenswrapper[4688]: I0123 18:21:11.984430 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.007214 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6979454977-pw2fp"] Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.126329 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-apiservice-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.127041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-webhook-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.127075 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhgn\" (UniqueName: \"kubernetes.io/projected/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-kube-api-access-prhgn\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.229248 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-webhook-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.229329 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhgn\" (UniqueName: \"kubernetes.io/projected/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-kube-api-access-prhgn\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.229401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-apiservice-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 
18:21:12.238716 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-apiservice-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.245347 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-webhook-cert\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.259794 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhgn\" (UniqueName: \"kubernetes.io/projected/61d2f464-2eea-403d-a6e7-3a5bb3a067a5-kube-api-access-prhgn\") pod \"metallb-operator-webhook-server-6979454977-pw2fp\" (UID: \"61d2f464-2eea-403d-a6e7-3a5bb3a067a5\") " pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.300287 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.365166 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-844488998d-d4vzw"] Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.468305 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" event={"ID":"1e8a4a5c-bbf0-404d-aada-461ca3e42d72","Type":"ContainerStarted","Data":"9dabaf8ec9d9ba6b7f31b587269a4c9ddfed8036d8f1115fb5b68c25020a2bef"} Jan 23 18:21:12 crc kubenswrapper[4688]: I0123 18:21:12.678809 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6979454977-pw2fp"] Jan 23 18:21:12 crc kubenswrapper[4688]: W0123 18:21:12.684518 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61d2f464_2eea_403d_a6e7_3a5bb3a067a5.slice/crio-7aded55b3d95e552432b766d702e3abf5ed51defe52e7c33394706b492fe2e09 WatchSource:0}: Error finding container 7aded55b3d95e552432b766d702e3abf5ed51defe52e7c33394706b492fe2e09: Status 404 returned error can't find the container with id 7aded55b3d95e552432b766d702e3abf5ed51defe52e7c33394706b492fe2e09 Jan 23 18:21:13 crc kubenswrapper[4688]: I0123 18:21:13.476371 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" event={"ID":"61d2f464-2eea-403d-a6e7-3a5bb3a067a5","Type":"ContainerStarted","Data":"7aded55b3d95e552432b766d702e3abf5ed51defe52e7c33394706b492fe2e09"} Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.768849 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.774947 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.797083 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.888599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l4f4\" (UniqueName: \"kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.888690 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.888715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.990088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l4f4\" (UniqueName: \"kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.990167 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.990203 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.990737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:15 crc kubenswrapper[4688]: I0123 18:21:15.991005 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.024658 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7l4f4\" (UniqueName: \"kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4\") pod \"certified-operators-tpjhl\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.101110 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.503216 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.505818 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.591487 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:16 crc kubenswrapper[4688]: I0123 18:21:16.714459 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:17 crc kubenswrapper[4688]: I0123 18:21:17.516027 4688 generic.go:334] "Generic (PLEG): container finished" podID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerID="aa71dbcc22651767f24fcb4fd188626351998bfad0956592eeb33a9430c05420" exitCode=0 Jan 23 18:21:17 crc kubenswrapper[4688]: I0123 18:21:17.516227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerDied","Data":"aa71dbcc22651767f24fcb4fd188626351998bfad0956592eeb33a9430c05420"} Jan 23 18:21:17 crc kubenswrapper[4688]: I0123 18:21:17.518029 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerStarted","Data":"4987d9e6e7f81621d5f8db2d83715794ac0041a46ae679e08420243516d5f2fc"} Jan 23 18:21:17 crc kubenswrapper[4688]: I0123 18:21:17.605208 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:20 crc kubenswrapper[4688]: I0123 18:21:20.932121 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:20 crc kubenswrapper[4688]: I0123 18:21:20.933355 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgkwg" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="registry-server" containerID="cri-o://e2ca5aa6e540ed1ba55d2bb836858db53288836a0fe9c538ed2da9ab39faa35b" gracePeriod=2 Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.560837 4688 generic.go:334] "Generic (PLEG): container finished" podID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerID="e2ca5aa6e540ed1ba55d2bb836858db53288836a0fe9c538ed2da9ab39faa35b" exitCode=0 Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.560900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerDied","Data":"e2ca5aa6e540ed1ba55d2bb836858db53288836a0fe9c538ed2da9ab39faa35b"} Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.774323 4688 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.833513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content\") pod \"46c566c9-13d2-441f-a52c-946c9ea8f649\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.833724 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities\") pod \"46c566c9-13d2-441f-a52c-946c9ea8f649\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.834637 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities" (OuterVolumeSpecName: "utilities") pod "46c566c9-13d2-441f-a52c-946c9ea8f649" (UID: "46c566c9-13d2-441f-a52c-946c9ea8f649"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.834721 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp5jp\" (UniqueName: \"kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp\") pod \"46c566c9-13d2-441f-a52c-946c9ea8f649\" (UID: \"46c566c9-13d2-441f-a52c-946c9ea8f649\") " Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.836273 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.854485 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp" (OuterVolumeSpecName: "kube-api-access-hp5jp") pod "46c566c9-13d2-441f-a52c-946c9ea8f649" (UID: "46c566c9-13d2-441f-a52c-946c9ea8f649"). InnerVolumeSpecName "kube-api-access-hp5jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.864049 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46c566c9-13d2-441f-a52c-946c9ea8f649" (UID: "46c566c9-13d2-441f-a52c-946c9ea8f649"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.938076 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp5jp\" (UniqueName: \"kubernetes.io/projected/46c566c9-13d2-441f-a52c-946c9ea8f649-kube-api-access-hp5jp\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:21 crc kubenswrapper[4688]: I0123 18:21:21.938134 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c566c9-13d2-441f-a52c-946c9ea8f649-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.570528 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" event={"ID":"1e8a4a5c-bbf0-404d-aada-461ca3e42d72","Type":"ContainerStarted","Data":"69fd6d949102d6751983065e954945cf67d7e7cf7f2517d9386bce0be715d7cf"} Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.571367 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.573902 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgkwg" event={"ID":"46c566c9-13d2-441f-a52c-946c9ea8f649","Type":"ContainerDied","Data":"076d2614e1b9d12f15bedebeb0249b0579df7bc98aed8fdc10c6305217854e42"} Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.573979 4688 scope.go:117] "RemoveContainer" containerID="e2ca5aa6e540ed1ba55d2bb836858db53288836a0fe9c538ed2da9ab39faa35b" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.574153 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgkwg" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.590829 4688 generic.go:334] "Generic (PLEG): container finished" podID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerID="2987243a997bf62d0d00af30844f24e1b00b5c229404ced9c1815335a562fe4f" exitCode=0 Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.590970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerDied","Data":"2987243a997bf62d0d00af30844f24e1b00b5c229404ced9c1815335a562fe4f"} Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.598668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" event={"ID":"61d2f464-2eea-403d-a6e7-3a5bb3a067a5","Type":"ContainerStarted","Data":"dbd088b6931dc51fe08c68eefa1b3ff881c03964c40742ecc44c6aba17545bd5"} Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.599072 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.616291 4688 scope.go:117] "RemoveContainer" containerID="89670515261dbb678c5938d274f0597a9efaf13c54cbdb6485c040ff9d1553b3" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.644314 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" podStartSLOduration=2.805537245 podStartE2EDuration="11.644286729s" podCreationTimestamp="2026-01-23 18:21:11 +0000 UTC" firstStartedPulling="2026-01-23 18:21:12.688404093 +0000 UTC 
m=+867.684228534" lastFinishedPulling="2026-01-23 18:21:21.527153577 +0000 UTC m=+876.522978018" observedRunningTime="2026-01-23 18:21:22.642459212 +0000 UTC m=+877.638283653" watchObservedRunningTime="2026-01-23 18:21:22.644286729 +0000 UTC m=+877.640111170" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.652118 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" podStartSLOduration=2.552865931 podStartE2EDuration="11.651834786s" podCreationTimestamp="2026-01-23 18:21:11 +0000 UTC" firstStartedPulling="2026-01-23 18:21:12.397740447 +0000 UTC m=+867.393564888" lastFinishedPulling="2026-01-23 18:21:21.496709302 +0000 UTC m=+876.492533743" observedRunningTime="2026-01-23 18:21:22.61426669 +0000 UTC m=+877.610091151" watchObservedRunningTime="2026-01-23 18:21:22.651834786 +0000 UTC m=+877.647659227" Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.667638 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.673774 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgkwg"] Jan 23 18:21:22 crc kubenswrapper[4688]: I0123 18:21:22.721810 4688 scope.go:117] "RemoveContainer" containerID="ebf7f943d145d22252f83b0c09c0cfc4096921d46a07d4dc615a42fc08f5e69f" Jan 23 18:21:23 crc kubenswrapper[4688]: I0123 18:21:23.366428 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" path="/var/lib/kubelet/pods/46c566c9-13d2-441f-a52c-946c9ea8f649/volumes" Jan 23 18:21:23 crc kubenswrapper[4688]: I0123 18:21:23.609209 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerStarted","Data":"693df3f4811761f1c8e2737a8e70db353228853f7a8906f19094d640b622e9a7"} Jan 23 18:21:23 crc kubenswrapper[4688]: I0123 18:21:23.641917 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tpjhl" podStartSLOduration=3.080573142 podStartE2EDuration="8.641883955s" podCreationTimestamp="2026-01-23 18:21:15 +0000 UTC" firstStartedPulling="2026-01-23 18:21:17.520535121 +0000 UTC m=+872.516359562" lastFinishedPulling="2026-01-23 18:21:23.081845934 +0000 UTC m=+878.077670375" observedRunningTime="2026-01-23 18:21:23.639932043 +0000 UTC m=+878.635756504" watchObservedRunningTime="2026-01-23 18:21:23.641883955 +0000 UTC m=+878.637708396" Jan 23 18:21:26 crc kubenswrapper[4688]: I0123 18:21:26.101314 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:26 crc kubenswrapper[4688]: I0123 18:21:26.101844 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:26 crc kubenswrapper[4688]: I0123 18:21:26.177044 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:32 crc kubenswrapper[4688]: I0123 18:21:32.309037 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6979454977-pw2fp" Jan 23 18:21:36 crc kubenswrapper[4688]: I0123 18:21:36.149272 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:39 crc kubenswrapper[4688]: I0123 18:21:39.527956 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:39 crc kubenswrapper[4688]: I0123 18:21:39.529292 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tpjhl" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="registry-server" containerID="cri-o://693df3f4811761f1c8e2737a8e70db353228853f7a8906f19094d640b622e9a7" gracePeriod=2 Jan 23 18:21:39 crc kubenswrapper[4688]: I0123 18:21:39.724702 4688 generic.go:334] "Generic (PLEG): container finished" podID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerID="693df3f4811761f1c8e2737a8e70db353228853f7a8906f19094d640b622e9a7" exitCode=0 Jan 23 18:21:39 crc kubenswrapper[4688]: I0123 18:21:39.724764 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerDied","Data":"693df3f4811761f1c8e2737a8e70db353228853f7a8906f19094d640b622e9a7"} Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.495318 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.559408 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities\") pod \"28312144-f84b-4ee2-ab84-b78171d44fb1\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.559480 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l4f4\" (UniqueName: \"kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4\") pod \"28312144-f84b-4ee2-ab84-b78171d44fb1\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.559525 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content\") pod \"28312144-f84b-4ee2-ab84-b78171d44fb1\" (UID: \"28312144-f84b-4ee2-ab84-b78171d44fb1\") " Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.560488 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities" (OuterVolumeSpecName: "utilities") pod "28312144-f84b-4ee2-ab84-b78171d44fb1" (UID: "28312144-f84b-4ee2-ab84-b78171d44fb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.566424 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4" (OuterVolumeSpecName: "kube-api-access-7l4f4") pod "28312144-f84b-4ee2-ab84-b78171d44fb1" (UID: "28312144-f84b-4ee2-ab84-b78171d44fb1"). InnerVolumeSpecName "kube-api-access-7l4f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.611953 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28312144-f84b-4ee2-ab84-b78171d44fb1" (UID: "28312144-f84b-4ee2-ab84-b78171d44fb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.660907 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.660965 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l4f4\" (UniqueName: \"kubernetes.io/projected/28312144-f84b-4ee2-ab84-b78171d44fb1-kube-api-access-7l4f4\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.660980 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28312144-f84b-4ee2-ab84-b78171d44fb1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.735271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tpjhl" event={"ID":"28312144-f84b-4ee2-ab84-b78171d44fb1","Type":"ContainerDied","Data":"4987d9e6e7f81621d5f8db2d83715794ac0041a46ae679e08420243516d5f2fc"} Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.735352 4688 scope.go:117] "RemoveContainer" containerID="693df3f4811761f1c8e2737a8e70db353228853f7a8906f19094d640b622e9a7" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.735357 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tpjhl" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.757281 4688 scope.go:117] "RemoveContainer" containerID="2987243a997bf62d0d00af30844f24e1b00b5c229404ced9c1815335a562fe4f" Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.775677 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.781861 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tpjhl"] Jan 23 18:21:40 crc kubenswrapper[4688]: I0123 18:21:40.783502 4688 scope.go:117] "RemoveContainer" containerID="aa71dbcc22651767f24fcb4fd188626351998bfad0956592eeb33a9430c05420" Jan 23 18:21:41 crc kubenswrapper[4688]: I0123 18:21:41.366823 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" path="/var/lib/kubelet/pods/28312144-f84b-4ee2-ab84-b78171d44fb1/volumes" Jan 23 18:21:51 crc kubenswrapper[4688]: I0123 18:21:51.918211 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-844488998d-d4vzw" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.314655 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hmn2j"] Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315629 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="extract-content" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315651 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="extract-content" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315667 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="extract-content" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315674 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="extract-content" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315692 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="extract-utilities" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315701 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="extract-utilities" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315718 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315725 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315742 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="extract-utilities" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315784 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="extract-utilities" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.315801 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315812 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315971 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c566c9-13d2-441f-a52c-946c9ea8f649" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.315992 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="28312144-f84b-4ee2-ab84-b78171d44fb1" containerName="registry-server" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.319559 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.325625 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf"] Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.326856 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.328703 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.328726 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.329008 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.330429 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zr7lm" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.368392 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf"] Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.390150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62s4\" (UniqueName: \"kubernetes.io/projected/183de16f-fe88-4b85-9c1c-980569d0a89d-kube-api-access-x62s4\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.390402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/183de16f-fe88-4b85-9c1c-980569d0a89d-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.457709 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zq5np"] Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.459298 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.461926 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.463381 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-22zfs" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.465016 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.466306 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.478774 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-89xj6"] Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.480367 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.482894 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/183de16f-fe88-4b85-9c1c-980569d0a89d-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492664 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-sockets\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492722 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics-certs\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492751 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552f7\" (UniqueName: \"kubernetes.io/projected/0b227eb2-6da0-43af-a365-a532ff4e4a86-kube-api-access-552f7\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492824 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-startup\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " 
pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62s4\" (UniqueName: \"kubernetes.io/projected/183de16f-fe88-4b85-9c1c-980569d0a89d-kube-api-access-x62s4\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.492952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-conf\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.493057 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-reloader\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.518023 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-89xj6"] Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.523137 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/183de16f-fe88-4b85-9c1c-980569d0a89d-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.548488 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62s4\" (UniqueName: \"kubernetes.io/projected/183de16f-fe88-4b85-9c1c-980569d0a89d-kube-api-access-x62s4\") pod \"frr-k8s-webhook-server-7df86c4f6c-8kldf\" (UID: \"183de16f-fe88-4b85-9c1c-980569d0a89d\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.595530 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-conf\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.595633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwr8v\" (UniqueName: \"kubernetes.io/projected/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-kube-api-access-nwr8v\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.595687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.595958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-reloader\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596055 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metallb-excludel2\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-cert\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596256 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-conf\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596274 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-sockets\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596367 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics-certs\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkq66\" (UniqueName: \"kubernetes.io/projected/5950921c-c4d2-44ac-8fb9-853d22c0f04a-kube-api-access-pkq66\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596511 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-552f7\" (UniqueName: \"kubernetes.io/projected/0b227eb2-6da0-43af-a365-a532ff4e4a86-kube-api-access-552f7\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 
23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596557 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metrics-certs\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596575 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-reloader\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596591 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-startup\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.596809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.597151 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-sockets\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.597995 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0b227eb2-6da0-43af-a365-a532ff4e4a86-frr-startup\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.601441 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b227eb2-6da0-43af-a365-a532ff4e4a86-metrics-certs\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.618068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-552f7\" (UniqueName: \"kubernetes.io/projected/0b227eb2-6da0-43af-a365-a532ff4e4a86-kube-api-access-552f7\") pod \"frr-k8s-hmn2j\" (UID: \"0b227eb2-6da0-43af-a365-a532ff4e4a86\") " pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.644357 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.659628 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698668 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698764 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkq66\" (UniqueName: \"kubernetes.io/projected/5950921c-c4d2-44ac-8fb9-853d22c0f04a-kube-api-access-pkq66\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698809 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metrics-certs\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwr8v\" (UniqueName: \"kubernetes.io/projected/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-kube-api-access-nwr8v\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698911 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.698922 4688 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.699026 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs podName:f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc nodeName:}" failed. No retries permitted until 2026-01-23 18:21:54.199000449 +0000 UTC m=+909.194824890 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs") pod "controller-6968d8fdc4-89xj6" (UID: "f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc") : secret "controller-certs-secret" not found Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.698940 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metallb-excludel2\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.699458 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.699519 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-cert\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: E0123 18:21:53.699559 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist podName:5950921c-c4d2-44ac-8fb9-853d22c0f04a nodeName:}" failed. No retries permitted until 2026-01-23 18:21:54.199538224 +0000 UTC m=+909.195362665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist") pod "speaker-zq5np" (UID: "5950921c-c4d2-44ac-8fb9-853d22c0f04a") : secret "metallb-memberlist" not found Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.701128 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metallb-excludel2\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.703172 4688 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.714518 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-cert\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.716645 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-metrics-certs\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.721777 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkq66\" (UniqueName: \"kubernetes.io/projected/5950921c-c4d2-44ac-8fb9-853d22c0f04a-kube-api-access-pkq66\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:53 crc kubenswrapper[4688]: I0123 18:21:53.722728 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nwr8v\" (UniqueName: \"kubernetes.io/projected/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-kube-api-access-nwr8v\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.203000 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf"] Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.209960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.210107 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:54 crc kubenswrapper[4688]: E0123 18:21:54.210294 4688 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 18:21:54 crc kubenswrapper[4688]: E0123 18:21:54.210365 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist podName:5950921c-c4d2-44ac-8fb9-853d22c0f04a nodeName:}" failed. No retries permitted until 2026-01-23 18:21:55.210340973 +0000 UTC m=+910.206165414 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist") pod "speaker-zq5np" (UID: "5950921c-c4d2-44ac-8fb9-853d22c0f04a") : secret "metallb-memberlist" not found Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.216109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc-metrics-certs\") pod \"controller-6968d8fdc4-89xj6\" (UID: \"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc\") " pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.401893 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.644834 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-89xj6"] Jan 23 18:21:54 crc kubenswrapper[4688]: W0123 18:21:54.646223 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a8c0fb_4afd_4594_85ef_db4cea0ce3bc.slice/crio-df1fe3a85e0ae4d0f5d187a12d85469dd2df823a65f3b7118d6a307c318cecd7 WatchSource:0}: Error finding container df1fe3a85e0ae4d0f5d187a12d85469dd2df823a65f3b7118d6a307c318cecd7: Status 404 returned error can't find the container with id df1fe3a85e0ae4d0f5d187a12d85469dd2df823a65f3b7118d6a307c318cecd7 Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.857816 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"683a7a531dd768abe775b28d17d2ee93c0f0b72eb4a33848c31f9092d7003fef"} Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.858760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" event={"ID":"183de16f-fe88-4b85-9c1c-980569d0a89d","Type":"ContainerStarted","Data":"276c0376df5d979ab99281b05ad641793d92762c220c103907d253523341293d"} Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.861292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-89xj6" event={"ID":"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc","Type":"ContainerStarted","Data":"3b203c45f638ed02bcee3df975b8631289be8ee02c0eddd602be5197bda6cc27"} Jan 23 18:21:54 crc kubenswrapper[4688]: I0123 18:21:54.861371 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-89xj6" event={"ID":"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc","Type":"ContainerStarted","Data":"df1fe3a85e0ae4d0f5d187a12d85469dd2df823a65f3b7118d6a307c318cecd7"} Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.228853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.236316 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5950921c-c4d2-44ac-8fb9-853d22c0f04a-memberlist\") pod \"speaker-zq5np\" (UID: \"5950921c-c4d2-44ac-8fb9-853d22c0f04a\") " pod="metallb-system/speaker-zq5np" Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.278932 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-zq5np" Jan 23 18:21:55 crc kubenswrapper[4688]: W0123 18:21:55.315830 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5950921c_c4d2_44ac_8fb9_853d22c0f04a.slice/crio-87b5594cce8f66dc5d03c6fa5f4c624dc717def41d798a94321f229307c91a4d WatchSource:0}: Error finding container 87b5594cce8f66dc5d03c6fa5f4c624dc717def41d798a94321f229307c91a4d: Status 404 returned error can't find the container with id 87b5594cce8f66dc5d03c6fa5f4c624dc717def41d798a94321f229307c91a4d Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.872284 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-89xj6" event={"ID":"f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc","Type":"ContainerStarted","Data":"c45165a8068824791202664e17f1ec681828d4ce70337399f35ba7d1729f12ed"} Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.872870 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.874223 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zq5np" event={"ID":"5950921c-c4d2-44ac-8fb9-853d22c0f04a","Type":"ContainerStarted","Data":"87b5594cce8f66dc5d03c6fa5f4c624dc717def41d798a94321f229307c91a4d"} Jan 23 18:21:55 crc kubenswrapper[4688]: I0123 18:21:55.890232 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-89xj6" podStartSLOduration=2.890206528 podStartE2EDuration="2.890206528s" podCreationTimestamp="2026-01-23 18:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:55.889491828 +0000 UTC m=+910.885316279" watchObservedRunningTime="2026-01-23 18:21:55.890206528 +0000 UTC m=+910.886030969" Jan 23 18:21:56 crc kubenswrapper[4688]: I0123 18:21:56.896615 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zq5np" event={"ID":"5950921c-c4d2-44ac-8fb9-853d22c0f04a","Type":"ContainerStarted","Data":"e0b0d37a5edca4d35bdd22c1871cf8604e9855e355c46d693e2112a7faadb9cf"} Jan 23 18:21:56 crc kubenswrapper[4688]: I0123 18:21:56.896728 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zq5np" event={"ID":"5950921c-c4d2-44ac-8fb9-853d22c0f04a","Type":"ContainerStarted","Data":"ee397e4d6e23218919da81b5aeae503d50871894e5666e897d6b542ea6f0ee1f"} Jan 23 18:21:56 crc kubenswrapper[4688]: I0123 18:21:56.896868 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zq5np" Jan 23 18:22:04 crc kubenswrapper[4688]: I0123 18:22:04.414898 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-89xj6" Jan 23 18:22:04 crc kubenswrapper[4688]: I0123 18:22:04.438087 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zq5np" podStartSLOduration=11.43806155 podStartE2EDuration="11.43806155s" podCreationTimestamp="2026-01-23 18:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:21:56.943653007 +0000 UTC m=+911.939477448" watchObservedRunningTime="2026-01-23 18:22:04.43806155 +0000 UTC m=+919.433885981" Jan 23 18:22:04 crc kubenswrapper[4688]: I0123 
18:22:04.984470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"284869f9c9617aadc17d43549273fb4dca9710a8156d3ec5f8519bd38fdaed69"} Jan 23 18:22:04 crc kubenswrapper[4688]: I0123 18:22:04.990895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" event={"ID":"183de16f-fe88-4b85-9c1c-980569d0a89d","Type":"ContainerStarted","Data":"d5c1059cc54c49bbfc60568989f303a40e4b8fe435c645795ac226c75cfec184"} Jan 23 18:22:04 crc kubenswrapper[4688]: I0123 18:22:04.992158 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:22:05 crc kubenswrapper[4688]: I0123 18:22:05.285383 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zq5np" Jan 23 18:22:05 crc kubenswrapper[4688]: I0123 18:22:05.313519 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" podStartSLOduration=1.731653528 podStartE2EDuration="12.31348934s" podCreationTimestamp="2026-01-23 18:21:53 +0000 UTC" firstStartedPulling="2026-01-23 18:21:54.205746652 +0000 UTC m=+909.201571093" lastFinishedPulling="2026-01-23 18:22:04.787582464 +0000 UTC m=+919.783406905" observedRunningTime="2026-01-23 18:22:05.032981684 +0000 UTC m=+920.028806125" watchObservedRunningTime="2026-01-23 18:22:05.31348934 +0000 UTC m=+920.309313781" Jan 23 18:22:06 crc kubenswrapper[4688]: I0123 18:22:06.002313 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b227eb2-6da0-43af-a365-a532ff4e4a86" containerID="284869f9c9617aadc17d43549273fb4dca9710a8156d3ec5f8519bd38fdaed69" exitCode=0 Jan 23 18:22:06 crc kubenswrapper[4688]: I0123 18:22:06.002408 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerDied","Data":"284869f9c9617aadc17d43549273fb4dca9710a8156d3ec5f8519bd38fdaed69"} Jan 23 18:22:07 crc kubenswrapper[4688]: I0123 18:22:07.013678 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b227eb2-6da0-43af-a365-a532ff4e4a86" containerID="c654d301c42f8aeff652df35217cd9090b89878ac884a5a5690aac24408a512b" exitCode=0 Jan 23 18:22:07 crc kubenswrapper[4688]: I0123 18:22:07.013798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerDied","Data":"c654d301c42f8aeff652df35217cd9090b89878ac884a5a5690aac24408a512b"} Jan 23 18:22:08 crc kubenswrapper[4688]: I0123 18:22:08.026113 4688 generic.go:334] "Generic (PLEG): container finished" podID="0b227eb2-6da0-43af-a365-a532ff4e4a86" containerID="38965ae98dba857198bdd181c9bc11157af2c522dece980163da892edcd6e52a" exitCode=0 Jan 23 18:22:08 crc kubenswrapper[4688]: I0123 18:22:08.026204 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerDied","Data":"38965ae98dba857198bdd181c9bc11157af2c522dece980163da892edcd6e52a"} Jan 23 18:22:09 crc kubenswrapper[4688]: I0123 18:22:09.042633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" 
event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"b5705667470a90a588c661bbb5a587e205ba7270bd4e0010a1aca5cb3f5de99c"} Jan 23 18:22:09 crc kubenswrapper[4688]: I0123 18:22:09.043162 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"823123956c1666ca864934e9a5531df1c9782f4c061dd707b8b61d343a7b0efe"} Jan 23 18:22:09 crc kubenswrapper[4688]: I0123 18:22:09.043173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"c7dfb244c0a4560b9697e84909d08b9c312a34b909332028db655aa0103a8d03"} Jan 23 18:22:09 crc kubenswrapper[4688]: I0123 18:22:09.043199 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"2bb7e5ee2c01b8e9002d10820b6570ac99bd3fd8739493677852e9486dc1c677"} Jan 23 18:22:10 crc kubenswrapper[4688]: I0123 18:22:10.066131 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"07133cfb53cd5376785ca0ac066c2c018ff7295e823e75d97dfb1270eb01e5d3"} Jan 23 18:22:10 crc kubenswrapper[4688]: I0123 18:22:10.066711 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hmn2j" event={"ID":"0b227eb2-6da0-43af-a365-a532ff4e4a86","Type":"ContainerStarted","Data":"90515706a2fd2b193ad1d20da1e30131a623d9a13c3bda77bb92c4c76f3edb95"} Jan 23 18:22:10 crc kubenswrapper[4688]: I0123 18:22:10.066734 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.937729 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hmn2j" podStartSLOduration=8.366858566 podStartE2EDuration="18.937703358s" podCreationTimestamp="2026-01-23 18:21:53 +0000 UTC" firstStartedPulling="2026-01-23 18:21:54.192795572 +0000 UTC m=+909.188620023" lastFinishedPulling="2026-01-23 18:22:04.763640374 +0000 UTC m=+919.759464815" observedRunningTime="2026-01-23 18:22:10.098695526 +0000 UTC m=+925.094519987" watchObservedRunningTime="2026-01-23 18:22:11.937703358 +0000 UTC m=+926.933527799" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.942081 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z2jjg"] Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.943071 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.945860 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qbvcq" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.947405 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.948087 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 18:22:11 crc kubenswrapper[4688]: I0123 18:22:11.956522 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z2jjg"] Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.024897 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpptf\" (UniqueName: \"kubernetes.io/projected/491f4103-b520-4b84-9f90-a2d21d168a7a-kube-api-access-cpptf\") pod \"openstack-operator-index-z2jjg\" (UID: \"491f4103-b520-4b84-9f90-a2d21d168a7a\") " pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.126632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpptf\" (UniqueName: \"kubernetes.io/projected/491f4103-b520-4b84-9f90-a2d21d168a7a-kube-api-access-cpptf\") pod \"openstack-operator-index-z2jjg\" (UID: \"491f4103-b520-4b84-9f90-a2d21d168a7a\") " pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.154385 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpptf\" (UniqueName: \"kubernetes.io/projected/491f4103-b520-4b84-9f90-a2d21d168a7a-kube-api-access-cpptf\") pod \"openstack-operator-index-z2jjg\" (UID: \"491f4103-b520-4b84-9f90-a2d21d168a7a\") " pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.267037 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.764367 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z2jjg"] Jan 23 18:22:12 crc kubenswrapper[4688]: W0123 18:22:12.781231 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod491f4103_b520_4b84_9f90_a2d21d168a7a.slice/crio-52367b641da164921548d37e490323dd2d842cf2f9131e7570496ebff548f75d WatchSource:0}: Error finding container 52367b641da164921548d37e490323dd2d842cf2f9131e7570496ebff548f75d: Status 404 returned error can't find the container with id 52367b641da164921548d37e490323dd2d842cf2f9131e7570496ebff548f75d Jan 23 18:22:12 crc kubenswrapper[4688]: I0123 18:22:12.783743 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:22:13 crc kubenswrapper[4688]: I0123 18:22:13.090591 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2jjg" event={"ID":"491f4103-b520-4b84-9f90-a2d21d168a7a","Type":"ContainerStarted","Data":"52367b641da164921548d37e490323dd2d842cf2f9131e7570496ebff548f75d"} Jan 23 18:22:13 crc kubenswrapper[4688]: I0123 18:22:13.644943 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:22:13 crc kubenswrapper[4688]: I0123 18:22:13.726009 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hmn2j" Jan 23 18:22:17 crc kubenswrapper[4688]: I0123 18:22:17.126762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2jjg" event={"ID":"491f4103-b520-4b84-9f90-a2d21d168a7a","Type":"ContainerStarted","Data":"d36b39a4d2c903b33cf7156973fb115663dc682028ad56d001c8a4641682af49"} Jan 23 18:22:18 crc kubenswrapper[4688]: I0123 18:22:18.158098 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z2jjg" podStartSLOduration=4.497885964 podStartE2EDuration="7.158068006s" podCreationTimestamp="2026-01-23 18:22:11 +0000 UTC" firstStartedPulling="2026-01-23 18:22:12.783417971 +0000 UTC m=+927.779242412" lastFinishedPulling="2026-01-23 18:22:15.443600013 +0000 UTC m=+930.439424454" observedRunningTime="2026-01-23 18:22:18.155417827 +0000 UTC m=+933.151242278" watchObservedRunningTime="2026-01-23 18:22:18.158068006 +0000 UTC m=+933.153892447" Jan 23 18:22:22 crc kubenswrapper[4688]: I0123 18:22:22.267852 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:22 crc kubenswrapper[4688]: I0123 18:22:22.268423 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:22 crc kubenswrapper[4688]: I0123 18:22:22.303203 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:23 crc kubenswrapper[4688]: I0123 18:22:23.337548 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z2jjg" Jan 23 18:22:23 crc kubenswrapper[4688]: I0123 18:22:23.648133 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hmn2j" Jan 23 
18:22:23 crc kubenswrapper[4688]: I0123 18:22:23.666647 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8kldf" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.187756 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx"] Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.189440 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.192741 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-84cjt" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.202348 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx"] Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.326774 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.326841 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.327510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb89k\" (UniqueName: \"kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.428456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb89k\" (UniqueName: \"kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.428586 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.428615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.429651 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.429847 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.463013 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb89k\" (UniqueName: \"kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k\") pod \"2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.515121 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:24 crc kubenswrapper[4688]: I0123 18:22:24.903831 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx"] Jan 23 18:22:25 crc kubenswrapper[4688]: I0123 18:22:25.192633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" event={"ID":"05f53c55-f189-46fa-b193-2efbe87d3356","Type":"ContainerStarted","Data":"98aa4ee16d55ac5bfd72994d470e64fa6bf26b7aea877de33d2d690082b549d1"} Jan 23 18:22:27 crc kubenswrapper[4688]: I0123 18:22:27.211656 4688 generic.go:334] "Generic (PLEG): container finished" podID="05f53c55-f189-46fa-b193-2efbe87d3356" containerID="962d017807f7f4766c04b8e273ba0084bcfd968af738b2de9ef5547759b2d3c2" exitCode=0 Jan 23 18:22:27 crc kubenswrapper[4688]: I0123 18:22:27.211774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" event={"ID":"05f53c55-f189-46fa-b193-2efbe87d3356","Type":"ContainerDied","Data":"962d017807f7f4766c04b8e273ba0084bcfd968af738b2de9ef5547759b2d3c2"} Jan 23 18:22:28 crc kubenswrapper[4688]: I0123 18:22:28.227498 4688 generic.go:334] "Generic (PLEG): container finished" podID="05f53c55-f189-46fa-b193-2efbe87d3356" containerID="00947acf228df3ef934d9afbaf8854aac765ae50bd8295a52796eb1d001e999d" exitCode=0 Jan 23 18:22:28 crc kubenswrapper[4688]: I0123 18:22:28.227566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" 
event={"ID":"05f53c55-f189-46fa-b193-2efbe87d3356","Type":"ContainerDied","Data":"00947acf228df3ef934d9afbaf8854aac765ae50bd8295a52796eb1d001e999d"} Jan 23 18:22:29 crc kubenswrapper[4688]: I0123 18:22:29.238746 4688 generic.go:334] "Generic (PLEG): container finished" podID="05f53c55-f189-46fa-b193-2efbe87d3356" containerID="e7b3cb9f343e24e598c3ce4d9e4965b2e382cc8e1fa2df17451de309bff44ddc" exitCode=0 Jan 23 18:22:29 crc kubenswrapper[4688]: I0123 18:22:29.238848 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" event={"ID":"05f53c55-f189-46fa-b193-2efbe87d3356","Type":"ContainerDied","Data":"e7b3cb9f343e24e598c3ce4d9e4965b2e382cc8e1fa2df17451de309bff44ddc"} Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.574440 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.737147 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util\") pod \"05f53c55-f189-46fa-b193-2efbe87d3356\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.737357 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle\") pod \"05f53c55-f189-46fa-b193-2efbe87d3356\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.737420 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb89k\" (UniqueName: \"kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k\") pod \"05f53c55-f189-46fa-b193-2efbe87d3356\" (UID: \"05f53c55-f189-46fa-b193-2efbe87d3356\") " Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.738338 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle" (OuterVolumeSpecName: "bundle") pod "05f53c55-f189-46fa-b193-2efbe87d3356" (UID: "05f53c55-f189-46fa-b193-2efbe87d3356"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.745503 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k" (OuterVolumeSpecName: "kube-api-access-tb89k") pod "05f53c55-f189-46fa-b193-2efbe87d3356" (UID: "05f53c55-f189-46fa-b193-2efbe87d3356"). InnerVolumeSpecName "kube-api-access-tb89k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.752220 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util" (OuterVolumeSpecName: "util") pod "05f53c55-f189-46fa-b193-2efbe87d3356" (UID: "05f53c55-f189-46fa-b193-2efbe87d3356"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.839887 4688 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-util\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.839972 4688 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/05f53c55-f189-46fa-b193-2efbe87d3356-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:30 crc kubenswrapper[4688]: I0123 18:22:30.839983 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb89k\" (UniqueName: \"kubernetes.io/projected/05f53c55-f189-46fa-b193-2efbe87d3356-kube-api-access-tb89k\") on node \"crc\" DevicePath \"\"" Jan 23 18:22:31 crc kubenswrapper[4688]: I0123 18:22:31.263661 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" Jan 23 18:22:31 crc kubenswrapper[4688]: I0123 18:22:31.263718 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx" event={"ID":"05f53c55-f189-46fa-b193-2efbe87d3356","Type":"ContainerDied","Data":"98aa4ee16d55ac5bfd72994d470e64fa6bf26b7aea877de33d2d690082b549d1"} Jan 23 18:22:31 crc kubenswrapper[4688]: I0123 18:22:31.263815 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98aa4ee16d55ac5bfd72994d470e64fa6bf26b7aea877de33d2d690082b549d1" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.120809 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt"] Jan 23 18:22:34 crc kubenswrapper[4688]: E0123 18:22:34.121840 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="util" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.121864 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="util" Jan 23 18:22:34 crc kubenswrapper[4688]: E0123 18:22:34.121874 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="pull" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.121885 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="pull" Jan 23 18:22:34 crc kubenswrapper[4688]: E0123 18:22:34.121901 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="extract" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.121911 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="extract" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.122108 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f53c55-f189-46fa-b193-2efbe87d3356" containerName="extract" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.122944 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.125731 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-f7fjn" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.205340 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8lnp\" (UniqueName: \"kubernetes.io/projected/a7210d87-1894-4295-b8bd-0189ea05db2c-kube-api-access-d8lnp\") pod \"openstack-operator-controller-init-68b845cd55-nswgt\" (UID: \"a7210d87-1894-4295-b8bd-0189ea05db2c\") " pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.217438 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt"] Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.307074 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8lnp\" (UniqueName: \"kubernetes.io/projected/a7210d87-1894-4295-b8bd-0189ea05db2c-kube-api-access-d8lnp\") pod \"openstack-operator-controller-init-68b845cd55-nswgt\" (UID: \"a7210d87-1894-4295-b8bd-0189ea05db2c\") " pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.335422 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8lnp\" (UniqueName: \"kubernetes.io/projected/a7210d87-1894-4295-b8bd-0189ea05db2c-kube-api-access-d8lnp\") pod \"openstack-operator-controller-init-68b845cd55-nswgt\" (UID: \"a7210d87-1894-4295-b8bd-0189ea05db2c\") " pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.450621 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:34 crc kubenswrapper[4688]: I0123 18:22:34.949455 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt"] Jan 23 18:22:35 crc kubenswrapper[4688]: I0123 18:22:35.304323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" event={"ID":"a7210d87-1894-4295-b8bd-0189ea05db2c","Type":"ContainerStarted","Data":"90051ed1491b91bf986824ab8eb5be805b4a0ff89907a0e084ef8ea7f18c3cab"} Jan 23 18:22:40 crc kubenswrapper[4688]: I0123 18:22:40.363607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" event={"ID":"a7210d87-1894-4295-b8bd-0189ea05db2c","Type":"ContainerStarted","Data":"2dd2cd4024fd8edb06b5c57d0c24b35cf27d982c538b29f0d9f94a3f95120c25"} Jan 23 18:22:40 crc kubenswrapper[4688]: I0123 18:22:40.364422 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:22:40 crc kubenswrapper[4688]: I0123 18:22:40.393916 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" podStartSLOduration=1.519246972 podStartE2EDuration="6.393885918s" podCreationTimestamp="2026-01-23 18:22:34 +0000 UTC" firstStartedPulling="2026-01-23 18:22:34.966337682 +0000 UTC m=+949.962162143" lastFinishedPulling="2026-01-23 18:22:39.840976648 +0000 UTC m=+954.836801089" observedRunningTime="2026-01-23 18:22:40.390476175 +0000 UTC m=+955.386300626" watchObservedRunningTime="2026-01-23 18:22:40.393885918 +0000 UTC m=+955.389710359" Jan 23 18:22:54 crc kubenswrapper[4688]: I0123 18:22:54.454641 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68b845cd55-nswgt" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.881339 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh"] Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.883318 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.892703 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dzjgd" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.900322 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh"] Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.910828 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k"] Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.911877 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.914072 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wdbp4" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.963288 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj"] Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.964596 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.970716 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k628w" Jan 23 18:23:13 crc kubenswrapper[4688]: I0123 18:23:13.985109 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.001283 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.003621 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.006176 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gq5sk" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.009003 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.010882 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9l8\" (UniqueName: \"kubernetes.io/projected/9c6839a5-f543-42e6-8c94-7138c1200112-kube-api-access-8h9l8\") pod \"cinder-operator-controller-manager-69cf5d4557-rmt2k\" (UID: \"9c6839a5-f543-42e6-8c94-7138c1200112\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.010961 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g286h\" (UniqueName: \"kubernetes.io/projected/bd62301c-d101-483c-8fe3-a1a5eddee7fc-kube-api-access-g286h\") pod \"barbican-operator-controller-manager-7f86f8796f-2qzlh\" (UID: \"bd62301c-d101-483c-8fe3-a1a5eddee7fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.034196 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.044285 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.045833 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.049082 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lqdj6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.095307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.112535 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.114044 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.121431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9l8\" (UniqueName: \"kubernetes.io/projected/9c6839a5-f543-42e6-8c94-7138c1200112-kube-api-access-8h9l8\") pod \"cinder-operator-controller-manager-69cf5d4557-rmt2k\" (UID: \"9c6839a5-f543-42e6-8c94-7138c1200112\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.121512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvcz\" (UniqueName: \"kubernetes.io/projected/e9c016a5-4953-4944-9f6e-f086e5a70918-kube-api-access-wvvcz\") pod \"designate-operator-controller-manager-b45d7bf98-wz5qj\" (UID: \"e9c016a5-4953-4944-9f6e-f086e5a70918\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.121712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kxj9\" (UniqueName: \"kubernetes.io/projected/9ac53122-55ee-4db4-ad7c-8369e5117efe-kube-api-access-9kxj9\") pod \"glance-operator-controller-manager-78fdd796fd-q56fh\" (UID: \"9ac53122-55ee-4db4-ad7c-8369e5117efe\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.121760 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g286h\" (UniqueName: \"kubernetes.io/projected/bd62301c-d101-483c-8fe3-a1a5eddee7fc-kube-api-access-g286h\") pod \"barbican-operator-controller-manager-7f86f8796f-2qzlh\" (UID: \"bd62301c-d101-483c-8fe3-a1a5eddee7fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.152056 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pq4d6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.234429 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6zpq\" (UniqueName: \"kubernetes.io/projected/be846838-ce35-4c14-a0ea-3a501d4ef6ac-kube-api-access-d6zpq\") pod \"heat-operator-controller-manager-594c8c9d5d-v4qgl\" (UID: \"be846838-ce35-4c14-a0ea-3a501d4ef6ac\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.234557 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvcz\" (UniqueName: \"kubernetes.io/projected/e9c016a5-4953-4944-9f6e-f086e5a70918-kube-api-access-wvvcz\") pod \"designate-operator-controller-manager-b45d7bf98-wz5qj\" (UID: \"e9c016a5-4953-4944-9f6e-f086e5a70918\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.234614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kxj9\" (UniqueName: \"kubernetes.io/projected/9ac53122-55ee-4db4-ad7c-8369e5117efe-kube-api-access-9kxj9\") pod \"glance-operator-controller-manager-78fdd796fd-q56fh\" (UID: \"9ac53122-55ee-4db4-ad7c-8369e5117efe\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.234647 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bm6b\" (UniqueName: \"kubernetes.io/projected/e53011a2-ea48-49f2-afbc-0d4bf71ae725-kube-api-access-4bm6b\") pod \"horizon-operator-controller-manager-77d5c5b54f-wt2bv\" (UID: \"e53011a2-ea48-49f2-afbc-0d4bf71ae725\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.236132 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g286h\" (UniqueName: \"kubernetes.io/projected/bd62301c-d101-483c-8fe3-a1a5eddee7fc-kube-api-access-g286h\") pod \"barbican-operator-controller-manager-7f86f8796f-2qzlh\" (UID: \"bd62301c-d101-483c-8fe3-a1a5eddee7fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.243073 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9l8\" (UniqueName: \"kubernetes.io/projected/9c6839a5-f543-42e6-8c94-7138c1200112-kube-api-access-8h9l8\") pod \"cinder-operator-controller-manager-69cf5d4557-rmt2k\" (UID: \"9c6839a5-f543-42e6-8c94-7138c1200112\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.244500 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.246831 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.254774 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ld764" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.255026 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.277174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kxj9\" (UniqueName: \"kubernetes.io/projected/9ac53122-55ee-4db4-ad7c-8369e5117efe-kube-api-access-9kxj9\") pod \"glance-operator-controller-manager-78fdd796fd-q56fh\" (UID: \"9ac53122-55ee-4db4-ad7c-8369e5117efe\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.293554 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvcz\" (UniqueName: \"kubernetes.io/projected/e9c016a5-4953-4944-9f6e-f086e5a70918-kube-api-access-wvvcz\") pod \"designate-operator-controller-manager-b45d7bf98-wz5qj\" (UID: \"e9c016a5-4953-4944-9f6e-f086e5a70918\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.302377 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.317607 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.319018 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.321874 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-l6gzd" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.327914 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.332651 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.335447 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.335568 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bm6b\" (UniqueName: \"kubernetes.io/projected/e53011a2-ea48-49f2-afbc-0d4bf71ae725-kube-api-access-4bm6b\") pod \"horizon-operator-controller-manager-77d5c5b54f-wt2bv\" (UID: \"e53011a2-ea48-49f2-afbc-0d4bf71ae725\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.335614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6zpq\" (UniqueName: \"kubernetes.io/projected/be846838-ce35-4c14-a0ea-3a501d4ef6ac-kube-api-access-d6zpq\") pod \"heat-operator-controller-manager-594c8c9d5d-v4qgl\" (UID: \"be846838-ce35-4c14-a0ea-3a501d4ef6ac\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.335638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqnl\" (UniqueName: \"kubernetes.io/projected/cae5b14f-5f7e-477f-a17a-9ad3930c6862-kube-api-access-hwqnl\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.393854 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6zpq\" (UniqueName: \"kubernetes.io/projected/be846838-ce35-4c14-a0ea-3a501d4ef6ac-kube-api-access-d6zpq\") pod \"heat-operator-controller-manager-594c8c9d5d-v4qgl\" (UID: \"be846838-ce35-4c14-a0ea-3a501d4ef6ac\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.395175 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bm6b\" (UniqueName: \"kubernetes.io/projected/e53011a2-ea48-49f2-afbc-0d4bf71ae725-kube-api-access-4bm6b\") pod \"horizon-operator-controller-manager-77d5c5b54f-wt2bv\" (UID: \"e53011a2-ea48-49f2-afbc-0d4bf71ae725\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.427078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.438161 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvcz\" (UniqueName: \"kubernetes.io/projected/30cd4339-ab66-45e3-937d-b3d9b5c3ef62-kube-api-access-9xvcz\") pod \"ironic-operator-controller-manager-598f7747c9-ztl8x\" (UID: \"30cd4339-ab66-45e3-937d-b3d9b5c3ef62\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:23:14 crc kubenswrapper[4688]: 
I0123 18:23:14.438277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqnl\" (UniqueName: \"kubernetes.io/projected/cae5b14f-5f7e-477f-a17a-9ad3930c6862-kube-api-access-hwqnl\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.438303 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.438475 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.438536 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:14.938513342 +0000 UTC m=+989.934337783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.440657 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.441923 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.450558 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-r4lmr" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.450841 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.457338 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.458555 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.461177 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cbwcv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.468333 4688 util.go:30] "No sandbox for pod can be found. 
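The two E-lines above show the kubelet's per-volume retry policy: when MountVolume.SetUp fails because the infra-operator-webhook-server-cert secret does not exist yet, nestedpendingoperations schedules the next attempt with an increasing durationBeforeRetry (500ms here, 1s on the repeat failure further down). A minimal Go sketch of that doubling backoff; the 500ms starting delay and the doubling are what this log actually shows, while the cap value is an assumed placeholder:

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff mimics the retry pacing visible in the log: the first failure is
// retried after 500ms, the second after 1s, and so on, up to an assumed cap.
type expBackoff struct {
	delay time.Duration
	limit time.Duration
}

func (b *expBackoff) next() time.Duration {
	d := b.delay
	if b.delay < b.limit {
		b.delay *= 2 // double the wait after each failure
	}
	return d
}

func main() {
	b := expBackoff{delay: 500 * time.Millisecond, limit: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		fmt.Println("durationBeforeRetry", b.next()) // 500ms, 1s, 2s, 4s
	}
}
```

The backoff is keyed to the volume (note the "{volumeName:... podName:...}" operation key in the error), so a stuck cert volume does not slow down retries for any other volume of the same pod.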
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.470723 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqnl\" (UniqueName: \"kubernetes.io/projected/cae5b14f-5f7e-477f-a17a-9ad3930c6862-kube-api-access-hwqnl\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.484124 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.505920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.506156 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.508411 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.511649 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9mmql" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.533665 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.540386 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndhxg\" (UniqueName: \"kubernetes.io/projected/b0ecc6d1-2625-4fba-860a-3931984ec27a-kube-api-access-ndhxg\") pod \"keystone-operator-controller-manager-b8b6d4659-kjh92\" (UID: \"b0ecc6d1-2625-4fba-860a-3931984ec27a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.540604 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvcz\" (UniqueName: \"kubernetes.io/projected/30cd4339-ab66-45e3-937d-b3d9b5c3ef62-kube-api-access-9xvcz\") pod \"ironic-operator-controller-manager-598f7747c9-ztl8x\" (UID: \"30cd4339-ab66-45e3-937d-b3d9b5c3ef62\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.540660 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbttd\" (UniqueName: \"kubernetes.io/projected/6daaa808-ea3a-43fb-bff1-285cf870df77-kube-api-access-qbttd\") pod \"manila-operator-controller-manager-78c6999f6f-q6tnb\" (UID: \"6daaa808-ea3a-43fb-bff1-285cf870df77\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.544116 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.552245 4688 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.553409 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.557058 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pds9g" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.558429 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.569658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvcz\" (UniqueName: \"kubernetes.io/projected/30cd4339-ab66-45e3-937d-b3d9b5c3ef62-kube-api-access-9xvcz\") pod \"ironic-operator-controller-manager-598f7747c9-ztl8x\" (UID: \"30cd4339-ab66-45e3-937d-b3d9b5c3ef62\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.573333 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.579541 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.591496 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.592155 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-pbkjq" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.594302 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.600366 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.603273 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w6pkl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.647149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/676572f9-6a9f-4a4e-ae4c-8d8d300bf02a-kube-api-access-rjwsj\") pod \"nova-operator-controller-manager-6b8bc8d87d-k2g2j\" (UID: \"676572f9-6a9f-4a4e-ae4c-8d8d300bf02a\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.647283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbttd\" (UniqueName: \"kubernetes.io/projected/6daaa808-ea3a-43fb-bff1-285cf870df77-kube-api-access-qbttd\") pod \"manila-operator-controller-manager-78c6999f6f-q6tnb\" (UID: \"6daaa808-ea3a-43fb-bff1-285cf870df77\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.647335 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndhxg\" (UniqueName: \"kubernetes.io/projected/b0ecc6d1-2625-4fba-860a-3931984ec27a-kube-api-access-ndhxg\") pod \"keystone-operator-controller-manager-b8b6d4659-kjh92\" (UID: \"b0ecc6d1-2625-4fba-860a-3931984ec27a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.647358 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvqcc\" (UniqueName: \"kubernetes.io/projected/5e61a329-1ac1-4162-9d68-f3086ec3f16e-kube-api-access-jvqcc\") pod \"neutron-operator-controller-manager-78d58447c5-47x6q\" (UID: \"5e61a329-1ac1-4162-9d68-f3086ec3f16e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.647380 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmshf\" (UniqueName: \"kubernetes.io/projected/4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f-kube-api-access-nmshf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6\" (UID: \"4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.651162 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.665315 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.670244 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.686868 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.690512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndhxg\" (UniqueName: \"kubernetes.io/projected/b0ecc6d1-2625-4fba-860a-3931984ec27a-kube-api-access-ndhxg\") pod \"keystone-operator-controller-manager-b8b6d4659-kjh92\" (UID: \"b0ecc6d1-2625-4fba-860a-3931984ec27a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.694097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbttd\" (UniqueName: \"kubernetes.io/projected/6daaa808-ea3a-43fb-bff1-285cf870df77-kube-api-access-qbttd\") pod \"manila-operator-controller-manager-78c6999f6f-q6tnb\" (UID: \"6daaa808-ea3a-43fb-bff1-285cf870df77\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.717120 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.720207 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.734209 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hhd5s" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.747556 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.749983 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rk6d\" (UniqueName: \"kubernetes.io/projected/1232d539-d6e5-4aa6-ac00-36be9120b247-kube-api-access-7rk6d\") pod \"octavia-operator-controller-manager-7bd9774b6-mq2kk\" (UID: \"1232d539-d6e5-4aa6-ac00-36be9120b247\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.750059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/676572f9-6a9f-4a4e-ae4c-8d8d300bf02a-kube-api-access-rjwsj\") pod \"nova-operator-controller-manager-6b8bc8d87d-k2g2j\" (UID: \"676572f9-6a9f-4a4e-ae4c-8d8d300bf02a\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.750136 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvqcc\" (UniqueName: \"kubernetes.io/projected/5e61a329-1ac1-4162-9d68-f3086ec3f16e-kube-api-access-jvqcc\") pod \"neutron-operator-controller-manager-78d58447c5-47x6q\" (UID: \"5e61a329-1ac1-4162-9d68-f3086ec3f16e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.750161 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nmshf\" (UniqueName: \"kubernetes.io/projected/4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f-kube-api-access-nmshf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6\" (UID: \"4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.779599 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.781953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvqcc\" (UniqueName: \"kubernetes.io/projected/5e61a329-1ac1-4162-9d68-f3086ec3f16e-kube-api-access-jvqcc\") pod \"neutron-operator-controller-manager-78d58447c5-47x6q\" (UID: \"5e61a329-1ac1-4162-9d68-f3086ec3f16e\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.785952 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.787268 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.788889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/676572f9-6a9f-4a4e-ae4c-8d8d300bf02a-kube-api-access-rjwsj\") pod \"nova-operator-controller-manager-6b8bc8d87d-k2g2j\" (UID: \"676572f9-6a9f-4a4e-ae4c-8d8d300bf02a\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.793910 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h6p4j" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.794945 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmshf\" (UniqueName: \"kubernetes.io/projected/4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f-kube-api-access-nmshf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6\" (UID: \"4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.799906 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.801765 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.803390 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.805801 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.808244 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-gcmxb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.819239 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.833515 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.834981 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.840426 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-cfwbg" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.841149 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.848428 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.852741 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.853901 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.853968 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdch9\" (UniqueName: \"kubernetes.io/projected/af851c54-521b-4a32-95fd-df9fd55d2eee-kube-api-access-jdch9\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.854006 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rk6d\" (UniqueName: \"kubernetes.io/projected/1232d539-d6e5-4aa6-ac00-36be9120b247-kube-api-access-7rk6d\") pod \"octavia-operator-controller-manager-7bd9774b6-mq2kk\" (UID: \"1232d539-d6e5-4aa6-ac00-36be9120b247\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.854050 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fn7nc\" (UniqueName: \"kubernetes.io/projected/f277821c-c358-4283-ad35-61b187fb0878-kube-api-access-fn7nc\") pod \"ovn-operator-controller-manager-55db956ddc-6xgwb\" (UID: \"f277821c-c358-4283-ad35-61b187fb0878\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.854095 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lftxm\" (UniqueName: \"kubernetes.io/projected/f53bddcc-3d14-4066-980c-dcfa14f2965e-kube-api-access-lftxm\") pod \"placement-operator-controller-manager-5d646b7d76-zk9c9\" (UID: \"f53bddcc-3d14-4066-980c-dcfa14f2965e\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.854854 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q6p6f" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.859206 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.864887 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.914416 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.918340 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rk6d\" (UniqueName: \"kubernetes.io/projected/1232d539-d6e5-4aa6-ac00-36be9120b247-kube-api-access-7rk6d\") pod \"octavia-operator-controller-manager-7bd9774b6-mq2kk\" (UID: \"1232d539-d6e5-4aa6-ac00-36be9120b247\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.923457 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.951902 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj"] Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966426 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfz4d\" (UniqueName: \"kubernetes.io/projected/b058c042-b4f7-4470-82ec-4f5336b47992-kube-api-access-nfz4d\") pod \"swift-operator-controller-manager-547cbdb99f-9p6ps\" (UID: \"b058c042-b4f7-4470-82ec-4f5336b47992\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966654 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c6d2p\" (UniqueName: \"kubernetes.io/projected/55bb8a6a-0401-4cdc-92fb-595c5eeb5e55-kube-api-access-c6d2p\") pod \"telemetry-operator-controller-manager-85cd9769bb-k6hng\" (UID: \"55bb8a6a-0401-4cdc-92fb-595c5eeb5e55\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966693 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966752 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdch9\" (UniqueName: \"kubernetes.io/projected/af851c54-521b-4a32-95fd-df9fd55d2eee-kube-api-access-jdch9\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966814 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7nc\" (UniqueName: \"kubernetes.io/projected/f277821c-c358-4283-ad35-61b187fb0878-kube-api-access-fn7nc\") pod \"ovn-operator-controller-manager-55db956ddc-6xgwb\" (UID: \"f277821c-c358-4283-ad35-61b187fb0878\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:14 crc kubenswrapper[4688]: I0123 18:23:14.966899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lftxm\" (UniqueName: \"kubernetes.io/projected/f53bddcc-3d14-4066-980c-dcfa14f2965e-kube-api-access-lftxm\") pod \"placement-operator-controller-manager-5d646b7d76-zk9c9\" (UID: \"f53bddcc-3d14-4066-980c-dcfa14f2965e\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.981975 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.982203 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:15.982086692 +0000 UTC m=+990.977911133 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.982667 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:14 crc kubenswrapper[4688]: E0123 18:23:14.982728 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert podName:af851c54-521b-4a32-95fd-df9fd55d2eee nodeName:}" failed. 
No retries permitted until 2026-01-23 18:23:15.482711837 +0000 UTC m=+990.478536278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" (UID: "af851c54-521b-4a32-95fd-df9fd55d2eee") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.002357 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.015708 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.020771 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.021482 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nb7hw" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.023086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdch9\" (UniqueName: \"kubernetes.io/projected/af851c54-521b-4a32-95fd-df9fd55d2eee-kube-api-access-jdch9\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.031175 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7nc\" (UniqueName: \"kubernetes.io/projected/f277821c-c358-4283-ad35-61b187fb0878-kube-api-access-fn7nc\") pod \"ovn-operator-controller-manager-55db956ddc-6xgwb\" (UID: \"f277821c-c358-4283-ad35-61b187fb0878\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.040037 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lftxm\" (UniqueName: \"kubernetes.io/projected/f53bddcc-3d14-4066-980c-dcfa14f2965e-kube-api-access-lftxm\") pod \"placement-operator-controller-manager-5d646b7d76-zk9c9\" (UID: \"f53bddcc-3d14-4066-980c-dcfa14f2965e\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.046578 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.087116 4688 util.go:30] "No sandbox for pod can be found. 
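Between the errors, the ordinary volume flow keeps completing: each pod's volumes appear as "VerifyControllerAttachedVolume started", then "MountVolume started", then "MountVolume.SetUp succeeded". A toy Go model of that desired-state/actual-state reconcile loop, illustrative only; the type and function names below are invented, not the kubelet's:

```go
package main

import "fmt"

// Toy types, not the kubelet's: just enough to show the shape of the loop.
type volume struct{ name string }

// reconcile walks the desired set and mounts anything not yet in the actual
// set, echoing the three log messages each volume produces on the happy path.
func reconcile(desired []volume, actual map[string]bool) {
	for _, v := range desired {
		if actual[v.name] {
			continue // already mounted; the loop is idempotent
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		// In the real kubelet, SetUp can fail here (e.g. the Secret backing a
		// cert volume does not exist yet); the volume then stays desired-but-
		// unmounted and is retried with backoff on a later pass.
		actual[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	actual := map[string]bool{}
	reconcile([]volume{{"kube-api-access-7rk6d"}, {"cert"}}, actual)
	reconcile([]volume{{"kube-api-access-7rk6d"}, {"cert"}}, actual) // second pass: no-op
}
```

This shape explains why the same pod can log a mount failure for its "cert" volume while its kube-api-access-* volume succeeds in the very same second: each volume is reconciled independently.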
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.091465 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.092561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz4d\" (UniqueName: \"kubernetes.io/projected/b058c042-b4f7-4470-82ec-4f5336b47992-kube-api-access-nfz4d\") pod \"swift-operator-controller-manager-547cbdb99f-9p6ps\" (UID: \"b058c042-b4f7-4470-82ec-4f5336b47992\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.092595 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6d2p\" (UniqueName: \"kubernetes.io/projected/55bb8a6a-0401-4cdc-92fb-595c5eeb5e55-kube-api-access-c6d2p\") pod \"telemetry-operator-controller-manager-85cd9769bb-k6hng\" (UID: \"55bb8a6a-0401-4cdc-92fb-595c5eeb5e55\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.092630 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j94c\" (UniqueName: \"kubernetes.io/projected/6e8fb123-6d73-47c6-9d23-930c6ba3de69-kube-api-access-6j94c\") pod \"test-operator-controller-manager-69797bbcbd-l59kj\" (UID: \"6e8fb123-6d73-47c6-9d23-930c6ba3de69\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.092817 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.097875 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f9krb" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.114240 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.123044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6d2p\" (UniqueName: \"kubernetes.io/projected/55bb8a6a-0401-4cdc-92fb-595c5eeb5e55-kube-api-access-c6d2p\") pod \"telemetry-operator-controller-manager-85cd9769bb-k6hng\" (UID: \"55bb8a6a-0401-4cdc-92fb-595c5eeb5e55\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.128915 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz4d\" (UniqueName: \"kubernetes.io/projected/b058c042-b4f7-4470-82ec-4f5336b47992-kube-api-access-nfz4d\") pod \"swift-operator-controller-manager-547cbdb99f-9p6ps\" (UID: \"b058c042-b4f7-4470-82ec-4f5336b47992\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.134905 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.181401 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.182916 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.189619 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4qr8c" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.189963 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.190286 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.194458 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j94c\" (UniqueName: \"kubernetes.io/projected/6e8fb123-6d73-47c6-9d23-930c6ba3de69-kube-api-access-6j94c\") pod \"test-operator-controller-manager-69797bbcbd-l59kj\" (UID: \"6e8fb123-6d73-47c6-9d23-930c6ba3de69\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.194545 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgnj\" (UniqueName: \"kubernetes.io/projected/26066212-ab72-4450-b9b3-b08e6b43e333-kube-api-access-hrgnj\") pod \"watcher-operator-controller-manager-679dc965c9-qrkxl\" (UID: \"26066212-ab72-4450-b9b3-b08e6b43e333\") " pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.207365 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.221224 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.224134 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j94c\" (UniqueName: \"kubernetes.io/projected/6e8fb123-6d73-47c6-9d23-930c6ba3de69-kube-api-access-6j94c\") pod \"test-operator-controller-manager-69797bbcbd-l59kj\" (UID: \"6e8fb123-6d73-47c6-9d23-930c6ba3de69\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.233252 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.234666 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.238849 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6hn4l" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.247380 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.247932 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.296465 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grbc\" (UniqueName: \"kubernetes.io/projected/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-kube-api-access-8grbc\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.296551 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrz5s\" (UniqueName: \"kubernetes.io/projected/8d9bd4af-849d-417f-9bbd-8e661b88d557-kube-api-access-nrz5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlqcd\" (UID: \"8d9bd4af-849d-417f-9bbd-8e661b88d557\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.296632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.296712 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrgnj\" (UniqueName: \"kubernetes.io/projected/26066212-ab72-4450-b9b3-b08e6b43e333-kube-api-access-hrgnj\") pod \"watcher-operator-controller-manager-679dc965c9-qrkxl\" (UID: \"26066212-ab72-4450-b9b3-b08e6b43e333\") " pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.296805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.309517 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.321615 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrgnj\" (UniqueName: \"kubernetes.io/projected/26066212-ab72-4450-b9b3-b08e6b43e333-kube-api-access-hrgnj\") pod 
\"watcher-operator-controller-manager-679dc965c9-qrkxl\" (UID: \"26066212-ab72-4450-b9b3-b08e6b43e333\") " pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.330264 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.385049 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.406869 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.407438 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grbc\" (UniqueName: \"kubernetes.io/projected/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-kube-api-access-8grbc\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.407495 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrz5s\" (UniqueName: \"kubernetes.io/projected/8d9bd4af-849d-417f-9bbd-8e661b88d557-kube-api-access-nrz5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlqcd\" (UID: \"8d9bd4af-849d-417f-9bbd-8e661b88d557\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.407554 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.407799 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.407883 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:15.907856928 +0000 UTC m=+990.903681369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "metrics-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.408259 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.408315 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:15.908289778 +0000 UTC m=+990.904114219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.434637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grbc\" (UniqueName: \"kubernetes.io/projected/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-kube-api-access-8grbc\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.438171 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrz5s\" (UniqueName: \"kubernetes.io/projected/8d9bd4af-849d-417f-9bbd-8e661b88d557-kube-api-access-nrz5s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qlqcd\" (UID: \"8d9bd4af-849d-417f-9bbd-8e661b88d557\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.467721 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.509717 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh"] Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.515275 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.515578 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.515652 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert podName:af851c54-521b-4a32-95fd-df9fd55d2eee nodeName:}" failed. No retries permitted until 2026-01-23 18:23:16.515629999 +0000 UTC m=+991.511454440 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" (UID: "af851c54-521b-4a32-95fd-df9fd55d2eee") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.735952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" event={"ID":"e53011a2-ea48-49f2-afbc-0d4bf71ae725","Type":"ContainerStarted","Data":"46cf097b0934ad40f991eb7ba21437e2c51c263f8e034d4493bcb0d7aca95b3d"} Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.740616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" event={"ID":"9ac53122-55ee-4db4-ad7c-8369e5117efe","Type":"ContainerStarted","Data":"aa905bb9b47eaa1c7bfed272461555485e2ffda11f618cf623e4d9212aae836e"} Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.776807 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.932353 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: I0123 18:23:15.932493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.932696 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.932765 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:16.932742536 +0000 UTC m=+991.928566967 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "metrics-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.932818 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:23:15 crc kubenswrapper[4688]: E0123 18:23:15.932845 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:16.932836409 +0000 UTC m=+991.928660850 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.037011 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.037386 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.037466 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:18.037443634 +0000 UTC m=+993.033268075 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.144555 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.162500 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.243157 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.319582 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.354756 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.527558 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.569427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.569598 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.569672 4688 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert podName:af851c54-521b-4a32-95fd-df9fd55d2eee nodeName:}" failed. No retries permitted until 2026-01-23 18:23:18.569649909 +0000 UTC m=+993.565474350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" (UID: "af851c54-521b-4a32-95fd-df9fd55d2eee") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.653054 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.671134 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j"] Jan 23 18:23:16 crc kubenswrapper[4688]: W0123 18:23:16.689501 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55bb8a6a_0401_4cdc_92fb_595c5eeb5e55.slice/crio-497686f8d94f8fd25113e6acb56e16ce36e11d8d42a78b17f4967e9a01b63c9a WatchSource:0}: Error finding container 497686f8d94f8fd25113e6acb56e16ce36e11d8d42a78b17f4967e9a01b63c9a: Status 404 returned error can't find the container with id 497686f8d94f8fd25113e6acb56e16ce36e11d8d42a78b17f4967e9a01b63c9a Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.689872 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj"] Jan 23 18:23:16 crc kubenswrapper[4688]: W0123 18:23:16.690116 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod676572f9_6a9f_4a4e_ae4c_8d8d300bf02a.slice/crio-a80eeb84c628c00d237f5a282cf2107b217b6d45a222cd8535b6a68ee46ab44c WatchSource:0}: Error finding container a80eeb84c628c00d237f5a282cf2107b217b6d45a222cd8535b6a68ee46ab44c: Status 404 returned error can't find the container with id a80eeb84c628c00d237f5a282cf2107b217b6d45a222cd8535b6a68ee46ab44c Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.702324 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.705286 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.773237 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" event={"ID":"b0ecc6d1-2625-4fba-860a-3931984ec27a","Type":"ContainerStarted","Data":"de1e35d8de911294b9a29770c34a7c4e4615810e8236b138f68c3ce96aa0674a"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.780895 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" event={"ID":"55bb8a6a-0401-4cdc-92fb-595c5eeb5e55","Type":"ContainerStarted","Data":"497686f8d94f8fd25113e6acb56e16ce36e11d8d42a78b17f4967e9a01b63c9a"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.782598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" 
event={"ID":"bd62301c-d101-483c-8fe3-a1a5eddee7fc","Type":"ContainerStarted","Data":"6e541fb8b6d4e5e680763f7e0e133fc36d0eaf0536bb0095317ad4c02c8a83ee"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.783623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" event={"ID":"9c6839a5-f543-42e6-8c94-7138c1200112","Type":"ContainerStarted","Data":"4ff4139a118b0505ffe477b4c793f350b09edc12691e2287a0a8986dcbbb6ba8"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.784811 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" event={"ID":"4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f","Type":"ContainerStarted","Data":"76fdff63d0d2d148358de8b2e42afa22027cb08649ddcb42d321f83b1b5d70b0"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.785844 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" event={"ID":"e9c016a5-4953-4944-9f6e-f086e5a70918","Type":"ContainerStarted","Data":"d4ea42e79dc51fc3843383d926579b21b897013332b3201aa20971c694206511"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.786671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" event={"ID":"6e8fb123-6d73-47c6-9d23-930c6ba3de69","Type":"ContainerStarted","Data":"d92928d12c018a514f2a22103882ac37714558cf3390fe0f58ff4503d6cd28fa"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.789514 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" event={"ID":"30cd4339-ab66-45e3-937d-b3d9b5c3ef62","Type":"ContainerStarted","Data":"7bf5817b5219645793e71aadfae0bf465f65a5eb06391d8425d6f14c7bdbb31c"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.790832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" event={"ID":"6daaa808-ea3a-43fb-bff1-285cf870df77","Type":"ContainerStarted","Data":"51b77dd76a56d8ab14923452141e10e9d2dd2cc1afd09c2213b31d05aec83a7d"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.791760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" event={"ID":"676572f9-6a9f-4a4e-ae4c-8d8d300bf02a","Type":"ContainerStarted","Data":"a80eeb84c628c00d237f5a282cf2107b217b6d45a222cd8535b6a68ee46ab44c"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.792727 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" event={"ID":"be846838-ce35-4c14-a0ea-3a501d4ef6ac","Type":"ContainerStarted","Data":"a469b0078d0bb852233fc5ac23d00f4262690888bea54884f19855a1ce642aa9"} Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.900810 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb"] Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.978026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " 
pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:16 crc kubenswrapper[4688]: I0123 18:23:16.978221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.978436 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.978519 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:18.978495346 +0000 UTC m=+993.974319787 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "metrics-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.979000 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:23:16 crc kubenswrapper[4688]: E0123 18:23:16.979028 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:18.979019728 +0000 UTC m=+993.974844169 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "webhook-server-cert" not found Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.078023 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk"] Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.087467 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd"] Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.100522 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps"] Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.110508 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl"] Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.123549 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9"] Jan 23 18:23:17 crc kubenswrapper[4688]: W0123 18:23:17.123982 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf53bddcc_3d14_4066_980c_dcfa14f2965e.slice/crio-4eb703201637240e15b804a26125aa28a2eefe0740f3c11f045300f41d796445 WatchSource:0}: Error finding container 4eb703201637240e15b804a26125aa28a2eefe0740f3c11f045300f41d796445: Status 404 returned error can't find the container with id 4eb703201637240e15b804a26125aa28a2eefe0740f3c11f045300f41d796445 Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.129780 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9p6ps_openstack-operators(b058c042-b4f7-4470-82ec-4f5336b47992): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.135521 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podUID="b058c042-b4f7-4470-82ec-4f5336b47992" Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.139547 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrgnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-679dc965c9-qrkxl_openstack-operators(26066212-ab72-4450-b9b3-b08e6b43e333): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.140446 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lftxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-zk9c9_openstack-operators(f53bddcc-3d14-4066-980c-dcfa14f2965e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 18:23:17 crc 
kubenswrapper[4688]: E0123 18:23:17.140672 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podUID="26066212-ab72-4450-b9b3-b08e6b43e333" Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.141944 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podUID="f53bddcc-3d14-4066-980c-dcfa14f2965e" Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.811381 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" event={"ID":"b058c042-b4f7-4470-82ec-4f5336b47992","Type":"ContainerStarted","Data":"c5f7f0b4a52c2bc4b7e4619944913266d874b440713d02253a014a416cf9c2df"} Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.815847 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podUID="b058c042-b4f7-4470-82ec-4f5336b47992" Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.819604 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" event={"ID":"f277821c-c358-4283-ad35-61b187fb0878","Type":"ContainerStarted","Data":"39397e2417099d42a305a3d6e76ce798f2263d9e6efa148c94af3e339c6c1695"} Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.831547 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" event={"ID":"8d9bd4af-849d-417f-9bbd-8e661b88d557","Type":"ContainerStarted","Data":"98e2a6f591c27d0f274875aec83f630d2e3356a2b31b8511eb453358b713da26"} Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.838180 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" event={"ID":"f53bddcc-3d14-4066-980c-dcfa14f2965e","Type":"ContainerStarted","Data":"4eb703201637240e15b804a26125aa28a2eefe0740f3c11f045300f41d796445"} Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.848000 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podUID="f53bddcc-3d14-4066-980c-dcfa14f2965e" Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.855087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" event={"ID":"26066212-ab72-4450-b9b3-b08e6b43e333","Type":"ContainerStarted","Data":"296630fb2a3720fa01e4136f659ade3cc58575db51dd03964d805a92de5b7288"} Jan 23 18:23:17 crc kubenswrapper[4688]: E0123 18:23:17.861744 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podUID="26066212-ab72-4450-b9b3-b08e6b43e333" Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.879574 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" event={"ID":"5e61a329-1ac1-4162-9d68-f3086ec3f16e","Type":"ContainerStarted","Data":"63b450da21533577157f0fd094c23cc7b82840efa6c8d7a5b738ba016d93cd48"} Jan 23 18:23:17 crc kubenswrapper[4688]: I0123 18:23:17.882676 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" event={"ID":"1232d539-d6e5-4aa6-ac00-36be9120b247","Type":"ContainerStarted","Data":"a10044549133707b17b87181ab8b4ae312cd073138f9b884bafecbc52f90631f"} Jan 23 18:23:18 crc kubenswrapper[4688]: I0123 18:23:18.104788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.105063 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.105148 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:22.105122188 +0000 UTC m=+997.100946629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:18 crc kubenswrapper[4688]: I0123 18:23:18.616134 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.616429 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.616509 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert podName:af851c54-521b-4a32-95fd-df9fd55d2eee nodeName:}" failed. No retries permitted until 2026-01-23 18:23:22.61648478 +0000 UTC m=+997.612309221 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" (UID: "af851c54-521b-4a32-95fd-df9fd55d2eee") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.898426 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podUID="26066212-ab72-4450-b9b3-b08e6b43e333" Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.899063 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podUID="b058c042-b4f7-4470-82ec-4f5336b47992" Jan 23 18:23:18 crc kubenswrapper[4688]: E0123 18:23:18.899122 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podUID="f53bddcc-3d14-4066-980c-dcfa14f2965e" Jan 23 18:23:19 crc kubenswrapper[4688]: I0123 18:23:19.022488 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:19 crc kubenswrapper[4688]: I0123 18:23:19.022670 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:19 crc kubenswrapper[4688]: E0123 18:23:19.022859 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:23:19 crc kubenswrapper[4688]: E0123 18:23:19.022945 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:23.02291804 +0000 UTC m=+998.018742481 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "webhook-server-cert" not found Jan 23 18:23:19 crc kubenswrapper[4688]: E0123 18:23:19.023478 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:23:19 crc kubenswrapper[4688]: E0123 18:23:19.023511 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:23.023500914 +0000 UTC m=+998.019325355 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "metrics-server-cert" not found Jan 23 18:23:22 crc kubenswrapper[4688]: I0123 18:23:22.114691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:22 crc kubenswrapper[4688]: E0123 18:23:22.114980 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:22 crc kubenswrapper[4688]: E0123 18:23:22.115427 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:30.115394888 +0000 UTC m=+1005.111219329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:22 crc kubenswrapper[4688]: I0123 18:23:22.623647 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:22 crc kubenswrapper[4688]: E0123 18:23:22.623890 4688 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:22 crc kubenswrapper[4688]: E0123 18:23:22.623979 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert podName:af851c54-521b-4a32-95fd-df9fd55d2eee nodeName:}" failed. No retries permitted until 2026-01-23 18:23:30.623951902 +0000 UTC m=+1005.619776343 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" (UID: "af851c54-521b-4a32-95fd-df9fd55d2eee") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 18:23:23 crc kubenswrapper[4688]: I0123 18:23:23.034158 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:23 crc kubenswrapper[4688]: E0123 18:23:23.034395 4688 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 18:23:23 crc kubenswrapper[4688]: I0123 18:23:23.034705 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:23 crc kubenswrapper[4688]: E0123 18:23:23.034909 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:31.03485931 +0000 UTC m=+1006.030683751 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "metrics-server-cert" not found Jan 23 18:23:23 crc kubenswrapper[4688]: E0123 18:23:23.035040 4688 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 18:23:23 crc kubenswrapper[4688]: E0123 18:23:23.035202 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs podName:d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc nodeName:}" failed. No retries permitted until 2026-01-23 18:23:31.035163427 +0000 UTC m=+1006.030987858 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs") pod "openstack-operator-controller-manager-59bd4c58c8-qlfvx" (UID: "d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc") : secret "webhook-server-cert" not found Jan 23 18:23:30 crc kubenswrapper[4688]: I0123 18:23:30.208900 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:23:30 crc kubenswrapper[4688]: E0123 18:23:30.209202 4688 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:30 crc kubenswrapper[4688]: E0123 18:23:30.210020 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert podName:cae5b14f-5f7e-477f-a17a-9ad3930c6862 nodeName:}" failed. No retries permitted until 2026-01-23 18:23:46.209995145 +0000 UTC m=+1021.205819586 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert") pod "infra-operator-controller-manager-58749ffdfb-q4wv8" (UID: "cae5b14f-5f7e-477f-a17a-9ad3930c6862") : secret "infra-operator-webhook-server-cert" not found Jan 23 18:23:30 crc kubenswrapper[4688]: I0123 18:23:30.721273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:30 crc kubenswrapper[4688]: I0123 18:23:30.735318 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/af851c54-521b-4a32-95fd-df9fd55d2eee-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854s7w97\" (UID: \"af851c54-521b-4a32-95fd-df9fd55d2eee\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:30 crc kubenswrapper[4688]: I0123 18:23:30.844559 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:23:31 crc kubenswrapper[4688]: I0123 18:23:31.129650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:31 crc kubenswrapper[4688]: I0123 18:23:31.129784 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:31 crc kubenswrapper[4688]: I0123 18:23:31.134377 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-metrics-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:31 crc kubenswrapper[4688]: I0123 18:23:31.140296 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc-webhook-certs\") pod \"openstack-operator-controller-manager-59bd4c58c8-qlfvx\" (UID: \"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc\") " pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:31 crc kubenswrapper[4688]: I0123 18:23:31.341301 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.026806 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.027664 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rk6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-mq2kk_openstack-operators(1232d539-d6e5-4aa6-ac00-36be9120b247): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.028900 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" podUID="1232d539-d6e5-4aa6-ac00-36be9120b247" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.100630 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" podUID="1232d539-d6e5-4aa6-ac00-36be9120b247" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.696026 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.696479 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvqcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-47x6q_openstack-operators(5e61a329-1ac1-4162-9d68-f3086ec3f16e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:23:32 crc kubenswrapper[4688]: E0123 18:23:32.697774 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" podUID="5e61a329-1ac1-4162-9d68-f3086ec3f16e" Jan 23 18:23:33 crc kubenswrapper[4688]: E0123 18:23:33.107263 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" podUID="5e61a329-1ac1-4162-9d68-f3086ec3f16e" Jan 23 18:23:33 crc kubenswrapper[4688]: E0123 18:23:33.363201 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 18:23:33 crc kubenswrapper[4688]: E0123 18:23:33.363537 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6j94c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-l59kj_openstack-operators(6e8fb123-6d73-47c6-9d23-930c6ba3de69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:23:33 crc kubenswrapper[4688]: E0123 
18:23:33.364770 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" podUID="6e8fb123-6d73-47c6-9d23-930c6ba3de69" Jan 23 18:23:34 crc kubenswrapper[4688]: E0123 18:23:34.128675 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" podUID="6e8fb123-6d73-47c6-9d23-930c6ba3de69" Jan 23 18:23:34 crc kubenswrapper[4688]: E0123 18:23:34.524639 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 18:23:34 crc kubenswrapper[4688]: E0123 18:23:34.524925 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qbttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
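
Each "Unhandled Error" record above is the Go string form of the corev1.Container the operator Deployment declares, with quantities in resource.Quantity's internal rendering. A hedged reconstruction of the same spec in client-go types, using the manila values from the dump just shown (the variable name is ours):

```go
package operatorspec

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Reconstruction of the "manager" container dumped above for manila-operator.
// In the log, {{536870912 0} {} BinarySI} is 512Mi and
// {{10 -3} {} 10m DecimalSI} is 10 millicores.
var managerContainer = corev1.Container{
	Name:    "manager",
	Image:   "quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8",
	Command: []string{"/manager"},
	Args: []string{
		"--leader-elect",
		"--health-probe-bind-address=:8081",
		"--metrics-bind-address=127.0.0.1:8080",
	},
	Env: []corev1.EnvVar{
		{Name: "LEASE_DURATION", Value: "30"},
		{Name: "RENEW_DEADLINE", Value: "20"},
		{Name: "RETRY_PERIOD", Value: "5"},
		{Name: "ENABLE_WEBHOOKS", Value: "false"},
		{Name: "METRICS_CERTS", Value: "false"},
	},
	Resources: corev1.ResourceRequirements{
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("512Mi"), // 536870912 bytes
		},
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),
			corev1.ResourceMemory: resource.MustParse("256Mi"), // 268435456 bytes
		},
	},
	LivenessProbe: &corev1.Probe{
		ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)}},
		InitialDelaySeconds: 15, TimeoutSeconds: 1, PeriodSeconds: 20,
		SuccessThreshold: 1, FailureThreshold: 3,
	},
	ReadinessProbe: &corev1.Probe{
		ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)}},
		InitialDelaySeconds: 5, TimeoutSeconds: 1, PeriodSeconds: 10,
		SuccessThreshold: 1, FailureThreshold: 3,
	},
	ImagePullPolicy: corev1.PullIfNotPresent,
}
```
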
Jan 23 18:23:34 crc kubenswrapper[4688]: E0123 18:23:34.526168 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" podUID="6daaa808-ea3a-43fb-bff1-285cf870df77"
Jan 23 18:23:35 crc kubenswrapper[4688]: E0123 18:23:35.132734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" podUID="6daaa808-ea3a-43fb-bff1-285cf870df77"
Jan 23 18:23:36 crc kubenswrapper[4688]: E0123 18:23:36.826230 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f"
Jan 23 18:23:36 crc kubenswrapper[4688]: E0123 18:23:36.826977 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8h9l8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-rmt2k_openstack-operators(9c6839a5-f543-42e6-8c94-7138c1200112): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:36 crc kubenswrapper[4688]: E0123 18:23:36.828859 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" podUID="9c6839a5-f543-42e6-8c94-7138c1200112"
Jan 23 18:23:36 crc kubenswrapper[4688]: I0123 18:23:36.965603 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:23:36 crc kubenswrapper[4688]: I0123 18:23:36.965684 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
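
Interleaved with the pull errors, the node's own machine-config-daemon fails its liveness probe: the kubelet prober gets connection refused dialing 127.0.0.1:8798. An HTTP probe fails on any dial error or a status outside 200-399; a rough stand-alone equivalent of that check (kubelet's real prober also caps the body size and sets request headers):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Approximates the HTTP liveness check kubelet runs against the
// machine-config-daemon above (TimeoutSeconds:1 -> 1s client timeout).
// Only the pass/fail decision is reproduced here.
func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```
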
Jan 23 18:23:37 crc kubenswrapper[4688]: E0123 18:23:37.159257 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" podUID="9c6839a5-f543-42e6-8c94-7138c1200112"
Jan 23 18:23:37 crc kubenswrapper[4688]: E0123 18:23:37.700540 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492"
Jan 23 18:23:37 crc kubenswrapper[4688]: E0123 18:23:37.700831 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6zpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-v4qgl_openstack-operators(be846838-ce35-4c14-a0ea-3a501d4ef6ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:37 crc kubenswrapper[4688]: E0123 18:23:37.702054 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" podUID="be846838-ce35-4c14-a0ea-3a501d4ef6ac"
Jan 23 18:23:38 crc kubenswrapper[4688]: E0123 18:23:38.169793 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" podUID="be846838-ce35-4c14-a0ea-3a501d4ef6ac"
Jan 23 18:23:38 crc kubenswrapper[4688]: E0123 18:23:38.908896 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127"
Jan 23 18:23:38 crc kubenswrapper[4688]: E0123 18:23:38.909322 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6d2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-k6hng_openstack-operators(55bb8a6a-0401-4cdc-92fb-595c5eeb5e55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:38 crc kubenswrapper[4688]: E0123 18:23:38.910598 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" podUID="55bb8a6a-0401-4cdc-92fb-595c5eeb5e55"
Jan 23 18:23:39 crc kubenswrapper[4688]: E0123 18:23:39.176986 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" podUID="55bb8a6a-0401-4cdc-92fb-595c5eeb5e55"
Jan 23 18:23:39 crc kubenswrapper[4688]: E0123 18:23:39.654058 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84"
Jan 23 18:23:39 crc kubenswrapper[4688]: E0123 18:23:39.655104 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmshf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6_openstack-operators(4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:39 crc kubenswrapper[4688]: E0123 18:23:39.656323 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" podUID="4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f"
Jan 23 18:23:40 crc kubenswrapper[4688]: E0123 18:23:40.189459 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" podUID="4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f"
Jan 23 18:23:46 crc kubenswrapper[4688]: I0123 18:23:46.273952 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"
Jan 23 18:23:46 crc kubenswrapper[4688]: I0123 18:23:46.289730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cae5b14f-5f7e-477f-a17a-9ad3930c6862-cert\") pod \"infra-operator-controller-manager-58749ffdfb-q4wv8\" (UID: \"cae5b14f-5f7e-477f-a17a-9ad3930c6862\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"
Jan 23 18:23:46 crc kubenswrapper[4688]: I0123 18:23:46.438364 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ld764"
Jan 23 18:23:46 crc kubenswrapper[4688]: I0123 18:23:46.445754 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"
Jan 23 18:23:51 crc kubenswrapper[4688]: E0123 18:23:51.342928 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0"
Jan 23 18:23:51 crc kubenswrapper[4688]: E0123 18:23:51.344177 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lftxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-zk9c9_openstack-operators(f53bddcc-3d14-4066-980c-dcfa14f2965e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:51 crc kubenswrapper[4688]: E0123 18:23:51.346756 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podUID="f53bddcc-3d14-4066-980c-dcfa14f2965e"
Jan 23 18:23:54 crc kubenswrapper[4688]: E0123 18:23:54.427346 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 23 18:23:54 crc kubenswrapper[4688]: E0123 18:23:54.428086 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nrz5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qlqcd_openstack-operators(8d9bd4af-849d-417f-9bbd-8e661b88d557): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:54 crc kubenswrapper[4688]: E0123 18:23:54.429281 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" podUID="8d9bd4af-849d-417f-9bbd-8e661b88d557"
Jan 23 18:23:55 crc kubenswrapper[4688]: E0123 18:23:55.297751 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" podUID="8d9bd4af-849d-417f-9bbd-8e661b88d557"
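
The rabbitmq-cluster-operator dump differs from the "manager" containers above: the container is named "operator", it exposes a metrics port on 9782/TCP, requests far less (5m CPU, 64Mi memory, with 200m/500Mi limits), declares no probes, and resolves OPERATOR_NAMESPACE at runtime through the Downward API instead of a literal value. That env entry in client-go types:

```go
package rabbitmqenv

import corev1 "k8s.io/api/core/v1"

// OPERATOR_NAMESPACE in the dump above has no literal Value; it is
// filled in at runtime from the pod's own namespace via the Downward API.
var operatorNamespace = corev1.EnvVar{
	Name: "OPERATOR_NAMESPACE",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{
			APIVersion: "v1",
			FieldPath:  "metadata.namespace",
		},
	},
}
```

Its SecurityContext also drops all capabilities and pins RunAsUser:*1000660000 with RunAsNonRoot:*true, consistent with a restricted SCC assigned to the namespace.
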
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.174834 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.175764 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nfz4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9p6ps_openstack-operators(b058c042-b4f7-4470-82ec-4f5336b47992): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.176917 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podUID="b058c042-b4f7-4470-82ec-4f5336b47992"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.702904 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.703150 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ndhxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-kjh92_openstack-operators(b0ecc6d1-2625-4fba-860a-3931984ec27a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.704369 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" podUID="b0ecc6d1-2625-4fba-860a-3931984ec27a"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.784809 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.784909 4688 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9"
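
watcher-operator is the odd one out: it is pulled by tag from a bare IP registry (38.129.56.35:5001) rather than by digest from quay.io, and the same canceled pull surfaces through both log.go and kuberuntime_image.go. Whether such a registry actually serves the tag can be checked against the Docker Registry HTTP API v2; a sketch, assuming the endpoint speaks plain HTTP (plausible for a CRC insecure-registry setup, but not shown in the log):

```go
package main

import (
	"fmt"
	"net/http"
)

// HEAD request against the Registry v2 manifest endpoint for the image
// from the log. A 200 means the tag exists; plain HTTP is an assumption.
func main() {
	const url = "http://38.129.56.35:5001/v2/openstack-k8s-operators/watcher-operator/manifests/557991b31682102cc5465466dd6466fe516ca0b9"
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("manifest status:", resp.Status)
}
```
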
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.785178 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrgnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-679dc965c9-qrkxl_openstack-operators(26066212-ab72-4450-b9b3-b08e6b43e333): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 18:23:56 crc kubenswrapper[4688]: E0123 18:23:56.788352 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podUID="26066212-ab72-4450-b9b3-b08e6b43e333"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.217720 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8"]
Jan 23 18:23:57 crc kubenswrapper[4688]: W0123 18:23:57.275708 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae5b14f_5f7e_477f_a17a_9ad3930c6862.slice/crio-264ca474b994e79c2dccab625fca04f4ff55a5f3f007534047a8929ebd979301 WatchSource:0}: Error finding container 264ca474b994e79c2dccab625fca04f4ff55a5f3f007534047a8929ebd979301: Status 404 returned error can't find the container with id 264ca474b994e79c2dccab625fca04f4ff55a5f3f007534047a8929ebd979301
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.315974 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" event={"ID":"cae5b14f-5f7e-477f-a17a-9ad3930c6862","Type":"ContainerStarted","Data":"264ca474b994e79c2dccab625fca04f4ff55a5f3f007534047a8929ebd979301"}
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.320064 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" event={"ID":"e53011a2-ea48-49f2-afbc-0d4bf71ae725","Type":"ContainerStarted","Data":"58b86878ea6792c16149f2b9755b3f61aade520ac00d19977f5e825289ff0555"}
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.320114 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.328117 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" event={"ID":"9ac53122-55ee-4db4-ad7c-8369e5117efe","Type":"ContainerStarted","Data":"af7da44d79d45c644ed4ed3c6552a77eb516f8979c001537a8ea560c6e8de801"}
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.329278 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.331970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" event={"ID":"30cd4339-ab66-45e3-937d-b3d9b5c3ef62","Type":"ContainerStarted","Data":"ae861270d63e5098c213d4ba5a779b32c8c477e2c5d9bbc72c543dcb2f2aa96c"}
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.332252 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.335426 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" event={"ID":"bd62301c-d101-483c-8fe3-a1a5eddee7fc","Type":"ContainerStarted","Data":"f53ef075c43482e6c268cf40c3411b7131a0156b1088fe42fa1e471a67540aa9"}
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.335920 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh"
Jan 23 18:23:57 crc kubenswrapper[4688]: E0123 18:23:57.337527 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" podUID="b0ecc6d1-2625-4fba-860a-3931984ec27a"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.356240 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" podStartSLOduration=4.23968653 podStartE2EDuration="43.356208548s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:15.256511795 +0000 UTC m=+990.252336236" lastFinishedPulling="2026-01-23 18:23:54.373033803 +0000 UTC m=+1029.368858254" observedRunningTime="2026-01-23 18:23:57.35090273 +0000 UTC m=+1032.346727191" watchObservedRunningTime="2026-01-23 18:23:57.356208548 +0000 UTC m=+1032.352032989"
Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.411365 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" podStartSLOduration=4.885208987 podStartE2EDuration="43.411334049s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.362077079 +0000 UTC m=+991.357901510" lastFinishedPulling="2026-01-23 18:23:54.888202131 +0000 UTC m=+1029.884026572" observedRunningTime="2026-01-23 18:23:57.383085328 +0000 UTC m=+1032.378909769" watchObservedRunningTime="2026-01-23 18:23:57.411334049 +0000 UTC m=+1032.407158490"
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97"] Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.448987 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" podStartSLOduration=5.268306286 podStartE2EDuration="44.44895932s" podCreationTimestamp="2026-01-23 18:23:13 +0000 UTC" firstStartedPulling="2026-01-23 18:23:15.708281779 +0000 UTC m=+990.704106220" lastFinishedPulling="2026-01-23 18:23:54.888934813 +0000 UTC m=+1029.884759254" observedRunningTime="2026-01-23 18:23:57.437634672 +0000 UTC m=+1032.433459133" watchObservedRunningTime="2026-01-23 18:23:57.44895932 +0000 UTC m=+1032.444783761" Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.471564 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" podStartSLOduration=4.003773631 podStartE2EDuration="44.471528182s" podCreationTimestamp="2026-01-23 18:23:13 +0000 UTC" firstStartedPulling="2026-01-23 18:23:15.118827032 +0000 UTC m=+990.114651473" lastFinishedPulling="2026-01-23 18:23:55.586581583 +0000 UTC m=+1030.582406024" observedRunningTime="2026-01-23 18:23:57.45738308 +0000 UTC m=+1032.453207551" watchObservedRunningTime="2026-01-23 18:23:57.471528182 +0000 UTC m=+1032.467352643" Jan 23 18:23:57 crc kubenswrapper[4688]: I0123 18:23:57.527746 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx"] Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.392689 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" event={"ID":"5e61a329-1ac1-4162-9d68-f3086ec3f16e","Type":"ContainerStarted","Data":"6da24642076f859dae581d8ce06be0796841a0e3bbcde024c9ef02c83410121f"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.393602 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.417787 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" event={"ID":"9c6839a5-f543-42e6-8c94-7138c1200112","Type":"ContainerStarted","Data":"d556649fa569dd2dbd48dd59af4f730938c338f91c2dbc61847b844f75663c74"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.418267 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.451712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" event={"ID":"af851c54-521b-4a32-95fd-df9fd55d2eee","Type":"ContainerStarted","Data":"4e522e64a510bd0cc81e48449ceefa642a5d2752fe23ff2855b67fdd857c57bf"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.469532 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" event={"ID":"4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f","Type":"ContainerStarted","Data":"d6134fb5be40d513025a95b0e0b2c9679014e0304811e1244767e5f2b7422eb6"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.470809 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.483232 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" event={"ID":"e9c016a5-4953-4944-9f6e-f086e5a70918","Type":"ContainerStarted","Data":"fdface11124e540996ac498aaeaabb9c0a5c75503a13d23ee75a340802ee9f24"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.484799 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.491420 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" event={"ID":"55bb8a6a-0401-4cdc-92fb-595c5eeb5e55","Type":"ContainerStarted","Data":"4f7eba67e6a46bae415309b3f1b8e9903e6308c2184e79e16c30c46db8e40a96"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.492849 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.499253 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" event={"ID":"676572f9-6a9f-4a4e-ae4c-8d8d300bf02a","Type":"ContainerStarted","Data":"89905f75e2e7dab96df40be40b5afcfc1e593b17c918ed43a7b857195214cadf"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.500327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.520829 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" podStartSLOduration=4.436115777 podStartE2EDuration="44.52079278s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.766583842 +0000 UTC m=+991.762408283" lastFinishedPulling="2026-01-23 18:23:56.851260845 +0000 UTC m=+1031.847085286" observedRunningTime="2026-01-23 18:23:58.446932481 +0000 UTC m=+1033.442756922" watchObservedRunningTime="2026-01-23 18:23:58.52079278 +0000 UTC m=+1033.516617231" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.521761 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" podStartSLOduration=4.951397092 podStartE2EDuration="45.521752939s" podCreationTimestamp="2026-01-23 18:23:13 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.290177734 +0000 UTC m=+991.286002165" lastFinishedPulling="2026-01-23 18:23:56.860533571 +0000 UTC m=+1031.856358012" observedRunningTime="2026-01-23 18:23:58.517591655 +0000 UTC m=+1033.513416096" watchObservedRunningTime="2026-01-23 18:23:58.521752939 +0000 UTC m=+1033.517577380" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.527641 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" event={"ID":"6e8fb123-6d73-47c6-9d23-930c6ba3de69","Type":"ContainerStarted","Data":"3e13d5829b112bd982479666997968b5838e112be8887bb40eb4fdc3bb611dc6"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.528801 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.543094 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" event={"ID":"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc","Type":"ContainerStarted","Data":"6e14b25fd17bc6892d32f87c5ab8678b3f2eeabe1a4b237fc2734721ef429340"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.543173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" event={"ID":"d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc","Type":"ContainerStarted","Data":"2b05e50626be3bf2d7495cbb9473940a084047b169dee649466571232af17699"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.544119 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.582004 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" event={"ID":"1232d539-d6e5-4aa6-ac00-36be9120b247","Type":"ContainerStarted","Data":"cd7a5b7fa13f59eccbed71f3da1721d5557e55cc2d341d27e27e18216e99a443"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.583133 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.586746 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" podStartSLOduration=6.426407908 podStartE2EDuration="44.586705013s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.728484653 +0000 UTC m=+991.724309094" lastFinishedPulling="2026-01-23 18:23:54.888781748 +0000 UTC m=+1029.884606199" observedRunningTime="2026-01-23 18:23:58.577723575 +0000 UTC m=+1033.573548026" watchObservedRunningTime="2026-01-23 18:23:58.586705013 +0000 UTC m=+1033.582529454" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.601508 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" event={"ID":"f277821c-c358-4283-ad35-61b187fb0878","Type":"ContainerStarted","Data":"f9b0fac9381e9a3f3c585c59533add6220e94d697d3cec2eb9a9efc81fca92cf"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.602699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.620518 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" event={"ID":"6daaa808-ea3a-43fb-bff1-285cf870df77","Type":"ContainerStarted","Data":"8f0fd7e316a7d4a39870bd6888def6148d75981c02097abaf7e6a10b4e84d43b"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.621687 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.638978 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" 
event={"ID":"be846838-ce35-4c14-a0ea-3a501d4ef6ac","Type":"ContainerStarted","Data":"3f87c6601e43134c7b7a81fb130aac71d835634f1fd82520aaf9d52fcb4ac73d"} Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.639545 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.656215 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" podStartSLOduration=4.4773264170000004 podStartE2EDuration="44.65615712s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.707456615 +0000 UTC m=+991.703281056" lastFinishedPulling="2026-01-23 18:23:56.886287328 +0000 UTC m=+1031.882111759" observedRunningTime="2026-01-23 18:23:58.651973096 +0000 UTC m=+1033.647797537" watchObservedRunningTime="2026-01-23 18:23:58.65615712 +0000 UTC m=+1033.651981571" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.781347 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" podStartSLOduration=4.491772955 podStartE2EDuration="44.781312347s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.612161015 +0000 UTC m=+991.607985456" lastFinishedPulling="2026-01-23 18:23:56.901700407 +0000 UTC m=+1031.897524848" observedRunningTime="2026-01-23 18:23:58.736872393 +0000 UTC m=+1033.732696844" watchObservedRunningTime="2026-01-23 18:23:58.781312347 +0000 UTC m=+1033.777136808" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.781857 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" podStartSLOduration=7.172231382 podStartE2EDuration="45.781850333s" podCreationTimestamp="2026-01-23 18:23:13 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.27924073 +0000 UTC m=+991.275065181" lastFinishedPulling="2026-01-23 18:23:54.888859691 +0000 UTC m=+1029.884684132" observedRunningTime="2026-01-23 18:23:58.781799871 +0000 UTC m=+1033.777624302" watchObservedRunningTime="2026-01-23 18:23:58.781850333 +0000 UTC m=+1033.777674774" Jan 23 18:23:58 crc kubenswrapper[4688]: I0123 18:23:58.840798 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" podStartSLOduration=4.696394425 podStartE2EDuration="44.840765407s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.706637935 +0000 UTC m=+991.702462376" lastFinishedPulling="2026-01-23 18:23:56.851008917 +0000 UTC m=+1031.846833358" observedRunningTime="2026-01-23 18:23:58.836787328 +0000 UTC m=+1033.832611789" watchObservedRunningTime="2026-01-23 18:23:58.840765407 +0000 UTC m=+1033.836589848" Jan 23 18:23:59 crc kubenswrapper[4688]: I0123 18:23:59.017777 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" podStartSLOduration=4.895136459 podStartE2EDuration="45.017747716s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.726681189 +0000 UTC m=+991.722505630" lastFinishedPulling="2026-01-23 18:23:56.849292446 +0000 UTC m=+1031.845116887" observedRunningTime="2026-01-23 18:23:59.005337356 +0000 
UTC m=+1034.001161807" watchObservedRunningTime="2026-01-23 18:23:59.017747716 +0000 UTC m=+1034.013572157" Jan 23 18:23:59 crc kubenswrapper[4688]: I0123 18:23:59.018795 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" podStartSLOduration=5.4460259650000005 podStartE2EDuration="46.018786007s" podCreationTimestamp="2026-01-23 18:23:13 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.27841269 +0000 UTC m=+991.274237131" lastFinishedPulling="2026-01-23 18:23:56.851172742 +0000 UTC m=+1031.846997173" observedRunningTime="2026-01-23 18:23:58.914053859 +0000 UTC m=+1033.909878300" watchObservedRunningTime="2026-01-23 18:23:59.018786007 +0000 UTC m=+1034.014610448" Jan 23 18:23:59 crc kubenswrapper[4688]: I0123 18:23:59.141636 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" podStartSLOduration=45.141609673 podStartE2EDuration="45.141609673s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:23:59.137881022 +0000 UTC m=+1034.133705493" watchObservedRunningTime="2026-01-23 18:23:59.141609673 +0000 UTC m=+1034.137434114" Jan 23 18:23:59 crc kubenswrapper[4688]: I0123 18:23:59.147086 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" podStartSLOduration=5.391734078 podStartE2EDuration="45.147056436s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.926213854 +0000 UTC m=+991.922038295" lastFinishedPulling="2026-01-23 18:23:56.681536212 +0000 UTC m=+1031.677360653" observedRunningTime="2026-01-23 18:23:59.051304435 +0000 UTC m=+1034.047128886" watchObservedRunningTime="2026-01-23 18:23:59.147056436 +0000 UTC m=+1034.142880877" Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.706517 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" event={"ID":"cae5b14f-5f7e-477f-a17a-9ad3930c6862","Type":"ContainerStarted","Data":"4d8f36b0b0b21667c6c882fbe88ceef0569596faff3fb9b1123da96f29151f78"} Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.707504 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.708751 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" event={"ID":"af851c54-521b-4a32-95fd-df9fd55d2eee","Type":"ContainerStarted","Data":"98d00410a5be48ab34f17b6e5ced024287515db8d8672fde93904a5a86dd782e"} Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.708959 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.734779 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" podStartSLOduration=9.972181939 podStartE2EDuration="49.73474789s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 
18:23:17.097795535 +0000 UTC m=+992.093619976" lastFinishedPulling="2026-01-23 18:23:56.860361486 +0000 UTC m=+1031.856185927" observedRunningTime="2026-01-23 18:23:59.182301375 +0000 UTC m=+1034.178125826" watchObservedRunningTime="2026-01-23 18:24:03.73474789 +0000 UTC m=+1038.730572331" Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.738940 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" podStartSLOduration=44.299857602 podStartE2EDuration="49.738914894s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:57.289872493 +0000 UTC m=+1032.285696934" lastFinishedPulling="2026-01-23 18:24:02.728929785 +0000 UTC m=+1037.724754226" observedRunningTime="2026-01-23 18:24:03.730116922 +0000 UTC m=+1038.725941363" watchObservedRunningTime="2026-01-23 18:24:03.738914894 +0000 UTC m=+1038.734739335" Jan 23 18:24:03 crc kubenswrapper[4688]: I0123 18:24:03.778178 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" podStartSLOduration=44.532451916 podStartE2EDuration="49.778140311s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:57.47584605 +0000 UTC m=+1032.471670491" lastFinishedPulling="2026-01-23 18:24:02.721534445 +0000 UTC m=+1037.717358886" observedRunningTime="2026-01-23 18:24:03.770817103 +0000 UTC m=+1038.766641564" watchObservedRunningTime="2026-01-23 18:24:03.778140311 +0000 UTC m=+1038.773964762" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.337579 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-q56fh" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.506176 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wt2bv" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.511942 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-2qzlh" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.540401 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-rmt2k" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.698130 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-ztl8x" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.698302 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-v4qgl" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.707546 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-wz5qj" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.818237 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-q6tnb" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.863385 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6" Jan 23 18:24:04 crc kubenswrapper[4688]: I0123 18:24:04.918026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-47x6q" Jan 23 18:24:05 crc kubenswrapper[4688]: I0123 18:24:05.011993 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-k2g2j" Jan 23 18:24:05 crc kubenswrapper[4688]: I0123 18:24:05.212205 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-mq2kk" Jan 23 18:24:05 crc kubenswrapper[4688]: I0123 18:24:05.213263 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-6xgwb" Jan 23 18:24:05 crc kubenswrapper[4688]: I0123 18:24:05.229025 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-k6hng" Jan 23 18:24:05 crc kubenswrapper[4688]: I0123 18:24:05.506353 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-l59kj" Jan 23 18:24:06 crc kubenswrapper[4688]: E0123 18:24:06.360154 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podUID="f53bddcc-3d14-4066-980c-dcfa14f2965e" Jan 23 18:24:06 crc kubenswrapper[4688]: I0123 18:24:06.965408 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:24:06 crc kubenswrapper[4688]: I0123 18:24:06.965522 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:24:08 crc kubenswrapper[4688]: I0123 18:24:08.816138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" event={"ID":"8d9bd4af-849d-417f-9bbd-8e661b88d557","Type":"ContainerStarted","Data":"cb2c9ec6c85839fb98801eed376837b8a943560f67718848bb67d9adaa1e850f"} Jan 23 18:24:08 crc kubenswrapper[4688]: I0123 18:24:08.839057 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qlqcd" podStartSLOduration=3.908467511 podStartE2EDuration="54.839031648s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:17.096092294 +0000 UTC m=+992.091916735" lastFinishedPulling="2026-01-23 18:24:08.026656431 +0000 UTC m=+1043.022480872" observedRunningTime="2026-01-23 18:24:08.834852265 +0000 UTC m=+1043.830676716" watchObservedRunningTime="2026-01-23 
18:24:08.839031648 +0000 UTC m=+1043.834856089" Jan 23 18:24:10 crc kubenswrapper[4688]: E0123 18:24:10.358555 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podUID="b058c042-b4f7-4470-82ec-4f5336b47992" Jan 23 18:24:10 crc kubenswrapper[4688]: I0123 18:24:10.852413 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854s7w97" Jan 23 18:24:11 crc kubenswrapper[4688]: I0123 18:24:11.348891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-59bd4c58c8-qlfvx" Jan 23 18:24:11 crc kubenswrapper[4688]: E0123 18:24:11.359660 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.35:5001/openstack-k8s-operators/watcher-operator:557991b31682102cc5465466dd6466fe516ca0b9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podUID="26066212-ab72-4450-b9b3-b08e6b43e333" Jan 23 18:24:15 crc kubenswrapper[4688]: I0123 18:24:15.878227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" event={"ID":"b0ecc6d1-2625-4fba-860a-3931984ec27a","Type":"ContainerStarted","Data":"1df82210ff1e40210708258f0178365b8b9ffda4c33f16ac98acc9804eab49e4"} Jan 23 18:24:15 crc kubenswrapper[4688]: I0123 18:24:15.880116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:24:15 crc kubenswrapper[4688]: I0123 18:24:15.898870 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" podStartSLOduration=3.5705806730000003 podStartE2EDuration="1m1.898837694s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:16.36502242 +0000 UTC m=+991.360846861" lastFinishedPulling="2026-01-23 18:24:14.693279441 +0000 UTC m=+1049.689103882" observedRunningTime="2026-01-23 18:24:15.897165905 +0000 UTC m=+1050.892990356" watchObservedRunningTime="2026-01-23 18:24:15.898837694 +0000 UTC m=+1050.894662155" Jan 23 18:24:16 crc kubenswrapper[4688]: I0123 18:24:16.451319 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-q4wv8" Jan 23 18:24:21 crc kubenswrapper[4688]: I0123 18:24:21.931602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" event={"ID":"f53bddcc-3d14-4066-980c-dcfa14f2965e","Type":"ContainerStarted","Data":"88b1fa9814a693e06483ed1faa7fdd6473e4b81459b9b64250dc793fbc79ee35"} Jan 23 18:24:21 crc kubenswrapper[4688]: I0123 18:24:21.932798 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:24:22 crc kubenswrapper[4688]: I0123 18:24:22.942224 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" event={"ID":"b058c042-b4f7-4470-82ec-4f5336b47992","Type":"ContainerStarted","Data":"07fcd99ee2c0aca35d888a375737fb6ae4c0fe60801b17d7793774083fa08248"} Jan 23 18:24:22 crc kubenswrapper[4688]: I0123 18:24:22.942882 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:24:22 crc kubenswrapper[4688]: I0123 18:24:22.969412 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" podStartSLOduration=5.273642999 podStartE2EDuration="1m8.969379895s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:17.14024253 +0000 UTC m=+992.136066971" lastFinishedPulling="2026-01-23 18:24:20.835979406 +0000 UTC m=+1055.831803867" observedRunningTime="2026-01-23 18:24:21.951103136 +0000 UTC m=+1056.946927577" watchObservedRunningTime="2026-01-23 18:24:22.969379895 +0000 UTC m=+1057.965204346" Jan 23 18:24:22 crc kubenswrapper[4688]: I0123 18:24:22.970288 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" podStartSLOduration=4.286220045 podStartE2EDuration="1m8.970277401s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:17.12948302 +0000 UTC m=+992.125307461" lastFinishedPulling="2026-01-23 18:24:21.813540376 +0000 UTC m=+1056.809364817" observedRunningTime="2026-01-23 18:24:22.960709549 +0000 UTC m=+1057.956533990" watchObservedRunningTime="2026-01-23 18:24:22.970277401 +0000 UTC m=+1057.966101852" Jan 23 18:24:23 crc kubenswrapper[4688]: I0123 18:24:23.954510 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" event={"ID":"26066212-ab72-4450-b9b3-b08e6b43e333","Type":"ContainerStarted","Data":"5c3f183663a3b4bc7875084c891d1c7edb4e0c7c86b72de65fdcc4dc47ee0b45"} Jan 23 18:24:23 crc kubenswrapper[4688]: I0123 18:24:23.954930 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:24:23 crc kubenswrapper[4688]: I0123 18:24:23.985630 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" podStartSLOduration=3.690772874 podStartE2EDuration="1m9.985600433s" podCreationTimestamp="2026-01-23 18:23:14 +0000 UTC" firstStartedPulling="2026-01-23 18:23:17.139083672 +0000 UTC m=+992.134908113" lastFinishedPulling="2026-01-23 18:24:23.433911231 +0000 UTC m=+1058.429735672" observedRunningTime="2026-01-23 18:24:23.976698351 +0000 UTC m=+1058.972522792" watchObservedRunningTime="2026-01-23 18:24:23.985600433 +0000 UTC m=+1058.981424874" Jan 23 18:24:24 crc kubenswrapper[4688]: I0123 18:24:24.784641 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kjh92" Jan 23 18:24:25 crc kubenswrapper[4688]: I0123 18:24:25.214127 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-zk9c9" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:35.251673 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9p6ps" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:35.471858 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-679dc965c9-qrkxl" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:36.965261 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:36.965355 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:36.965412 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:36.966115 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:24:38 crc kubenswrapper[4688]: I0123 18:24:36.966272 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2" gracePeriod=600 Jan 23 18:24:39 crc kubenswrapper[4688]: I0123 18:24:39.107705 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2" exitCode=0 Jan 23 18:24:39 crc kubenswrapper[4688]: I0123 18:24:39.107813 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2"} Jan 23 18:24:39 crc kubenswrapper[4688]: I0123 18:24:39.108454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e"} Jan 23 18:24:39 crc kubenswrapper[4688]: I0123 18:24:39.108499 4688 scope.go:117] "RemoveContainer" containerID="8c8c19ed1c7be125088def7ce3f0a64b978aa806db3742b6ac615e8c4bfd5bae" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.801899 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.804267 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.806772 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.806962 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.807014 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wnwxf" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.807777 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.824834 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:24:52 crc kubenswrapper[4688]: I0123 18:24:52.961061 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:52.961391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs9j\" (UniqueName: \"kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.031994 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.034093 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.041982 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.050741 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.102834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.102888 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs9j\" (UniqueName: \"kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.102950 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.104098 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.104527 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.104724 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglp8\" (UniqueName: \"kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.127236 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs9j\" (UniqueName: \"kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j\") pod \"dnsmasq-dns-675f4bcbfc-7kk28\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.206487 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglp8\" (UniqueName: \"kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc 
kubenswrapper[4688]: I0123 18:24:53.206578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.206635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.207483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.207624 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.228138 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglp8\" (UniqueName: \"kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8\") pod \"dnsmasq-dns-78dd6ddcc-5zvxl\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.360511 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.426213 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.809061 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:24:53 crc kubenswrapper[4688]: I0123 18:24:53.960735 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:24:53 crc kubenswrapper[4688]: W0123 18:24:53.961473 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod725b1457_ea72_4a1a_9d1a_59db2c9894dc.slice/crio-f8ab76734b87cd7a641950669a67f6967369b9c8737d65eee660a538954c30f0 WatchSource:0}: Error finding container f8ab76734b87cd7a641950669a67f6967369b9c8737d65eee660a538954c30f0: Status 404 returned error can't find the container with id f8ab76734b87cd7a641950669a67f6967369b9c8737d65eee660a538954c30f0 Jan 23 18:24:54 crc kubenswrapper[4688]: I0123 18:24:54.259723 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" event={"ID":"725b1457-ea72-4a1a-9d1a-59db2c9894dc","Type":"ContainerStarted","Data":"f8ab76734b87cd7a641950669a67f6967369b9c8737d65eee660a538954c30f0"} Jan 23 18:24:54 crc kubenswrapper[4688]: I0123 18:24:54.261254 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" event={"ID":"5db6a74b-02de-4be4-b074-2f1e0002d74d","Type":"ContainerStarted","Data":"b3186b1c5dcdb8efbb7fe2741a34e16bea41d12280566360de1051c68cd394f0"} Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.756672 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.784053 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.785369 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.799971 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.924262 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.924328 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-588cc\" (UniqueName: \"kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:55 crc kubenswrapper[4688]: I0123 18:24:55.924428 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.026696 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.026822 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.026862 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-588cc\" (UniqueName: \"kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.028394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.028440 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.076582 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-588cc\" (UniqueName: 
\"kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc\") pod \"dnsmasq-dns-666b6646f7-5nd8k\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.105425 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.230489 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.264162 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.265724 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.278046 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.434053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.434154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.434283 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz79q\" (UniqueName: \"kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.538052 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.538124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.538288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz79q\" (UniqueName: \"kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.539720 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.541045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.588248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz79q\" (UniqueName: \"kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q\") pod \"dnsmasq-dns-57d769cc4f-zw5fk\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.595560 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.954319 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.968028 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.973485 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.973712 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.973852 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.974851 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.975033 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gr9f8" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.975207 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.975398 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 18:24:56 crc kubenswrapper[4688]: I0123 18:24:56.989534 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.010638 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141520 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141599 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141722 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141757 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141819 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141844 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lxcj\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.141865 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.174854 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:24:57 crc kubenswrapper[4688]: W0123 18:24:57.199871 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9bae02c_8813_4e0f_8781_7242cb10fd50.slice/crio-cdedac1ed6e023d7f48b77a64fcd9db9c30b106db901948b6e85ac0beed9e7a9 WatchSource:0}: Error finding container cdedac1ed6e023d7f48b77a64fcd9db9c30b106db901948b6e85ac0beed9e7a9: Status 404 returned error can't find the container with id cdedac1ed6e023d7f48b77a64fcd9db9c30b106db901948b6e85ac0beed9e7a9 Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.245575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.247676 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.247891 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248426 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248475 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248565 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lxcj\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248615 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248694 4688 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248746 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.248779 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.249030 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.249126 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.249153 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.249998 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.250055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.257493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.265981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc 
kubenswrapper[4688]: I0123 18:24:57.271487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.271890 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.278786 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.301063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.331667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" event={"ID":"f9bae02c-8813-4e0f-8781-7242cb10fd50","Type":"ContainerStarted","Data":"cdedac1ed6e023d7f48b77a64fcd9db9c30b106db901948b6e85ac0beed9e7a9"} Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.333852 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.358040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" event={"ID":"1bfd835d-1fb9-40e2-b28d-a081f287cfdb","Type":"ContainerStarted","Data":"90101393e56ee8e07d2a4f6bf2a7003b388b96ce4a861f466aa22df755026ee0"} Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.359341 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lxcj\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj\") pod \"rabbitmq-server-0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.451619 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.453599 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.461626 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.461820 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7nsc9" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.461840 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.461947 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.462035 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.462093 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.462170 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.469830 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562272 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562370 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562410 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562460 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562490 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562540 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562564 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562659 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrxt\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.562744 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.612847 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666514 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666573 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgrxt\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666604 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.666986 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.667250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.667306 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.667551 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.667594 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.667627 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.671175 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.673085 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.674031 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.674468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.681001 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.684044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.685033 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.686308 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.693473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.699108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgrxt\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.755731 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:57 crc kubenswrapper[4688]: I0123 18:24:57.786393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.091977 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.416575 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.728395 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.733300 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.745305 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.745543 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.746706 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.747415 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-l7fqs" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.749915 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.757121 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.898712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.898785 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.898818 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.898857 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.898892 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8vs\" (UniqueName: \"kubernetes.io/projected/4c805a15-64d3-4320-940e-a6859affbf9c-kube-api-access-qx8vs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.899041 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.899074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.899128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:58 crc kubenswrapper[4688]: I0123 18:24:58.995747 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000567 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000816 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000857 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.000893 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8vs\" 
(UniqueName: \"kubernetes.io/projected/4c805a15-64d3-4320-940e-a6859affbf9c-kube-api-access-qx8vs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.002210 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-kolla-config\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.003457 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-default\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.004452 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.008529 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c805a15-64d3-4320-940e-a6859affbf9c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.012008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.012097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c805a15-64d3-4320-940e-a6859affbf9c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.016707 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c805a15-64d3-4320-940e-a6859affbf9c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.020019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8vs\" (UniqueName: \"kubernetes.io/projected/4c805a15-64d3-4320-940e-a6859affbf9c-kube-api-access-qx8vs\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.044706 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"4c805a15-64d3-4320-940e-a6859affbf9c\") " pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 
18:24:59.080523 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.416612 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerStarted","Data":"c74eae847a88311c1e84d30e458620da81dd8c5b868b70cb395d8d60e36f5e78"} Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.428728 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerStarted","Data":"e5c5975039225e5a0bb9d22ed76aae1f72757e66485cc620bd433cc44f2fdd9e"} Jan 23 18:24:59 crc kubenswrapper[4688]: I0123 18:24:59.776949 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 18:24:59 crc kubenswrapper[4688]: W0123 18:24:59.801512 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c805a15_64d3_4320_940e_a6859affbf9c.slice/crio-ae6fe60ab1c09fbc80e6241eeb5773c592b75414bdcbd190bd7e84ec8cace09f WatchSource:0}: Error finding container ae6fe60ab1c09fbc80e6241eeb5773c592b75414bdcbd190bd7e84ec8cace09f: Status 404 returned error can't find the container with id ae6fe60ab1c09fbc80e6241eeb5773c592b75414bdcbd190bd7e84ec8cace09f Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.358689 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.378525 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.378677 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.387579 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9vl4j" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.388063 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.388201 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.388311 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.461058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c805a15-64d3-4320-940e-a6859affbf9c","Type":"ContainerStarted","Data":"ae6fe60ab1c09fbc80e6241eeb5773c592b75414bdcbd190bd7e84ec8cace09f"} Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.548167 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.549753 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.556876 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557077 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557103 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557177 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd5w\" (UniqueName: \"kubernetes.io/projected/697e30b7-f8ce-45c0-8299-b6021b11a639-kube-api-access-2gd5w\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.557230 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.565410 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.571773 4688 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.572548 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-wmjbv" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.576823 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662064 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662132 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662284 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kolla-config\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662354 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-config-data\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662417 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662588 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662665 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-j5vl6\" (UniqueName: \"kubernetes.io/projected/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kube-api-access-j5vl6\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662733 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662763 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.662973 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gd5w\" (UniqueName: \"kubernetes.io/projected/697e30b7-f8ce-45c0-8299-b6021b11a639-kube-api-access-2gd5w\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.663137 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.663739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.665026 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.665081 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.665701 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.672043 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/697e30b7-f8ce-45c0-8299-b6021b11a639-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.687295 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.701314 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697e30b7-f8ce-45c0-8299-b6021b11a639-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.714063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gd5w\" (UniqueName: \"kubernetes.io/projected/697e30b7-f8ce-45c0-8299-b6021b11a639-kube-api-access-2gd5w\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.721559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697e30b7-f8ce-45c0-8299-b6021b11a639\") " pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.733561 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.777599 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.777708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kolla-config\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.777734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.777768 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-config-data\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.777846 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vl6\" (UniqueName: \"kubernetes.io/projected/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kube-api-access-j5vl6\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.781117 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-config-data\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.781338 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kolla-config\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.786735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.789208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.797994 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vl6\" (UniqueName: \"kubernetes.io/projected/1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c-kube-api-access-j5vl6\") pod \"memcached-0\" (UID: 
\"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c\") " pod="openstack/memcached-0" Jan 23 18:25:00 crc kubenswrapper[4688]: I0123 18:25:00.965850 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 18:25:01 crc kubenswrapper[4688]: I0123 18:25:01.797226 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.006140 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.511040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c","Type":"ContainerStarted","Data":"6d2f301f3f5c894ca1db98603769577014c3f3dc6bb4aa4e555c500ba43226e9"} Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.556331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697e30b7-f8ce-45c0-8299-b6021b11a639","Type":"ContainerStarted","Data":"dac379e79753a3e291119a59cd238c8edb2d1f728c6c1461df3b379ccac9929e"} Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.808785 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.810006 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.822168 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ckfrq" Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.829358 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:25:02 crc kubenswrapper[4688]: I0123 18:25:02.956069 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czjf\" (UniqueName: \"kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf\") pod \"kube-state-metrics-0\" (UID: \"2592fa6b-08d5-4d04-bc61-aa69d8aeef52\") " pod="openstack/kube-state-metrics-0" Jan 23 18:25:03 crc kubenswrapper[4688]: I0123 18:25:03.058614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7czjf\" (UniqueName: \"kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf\") pod \"kube-state-metrics-0\" (UID: \"2592fa6b-08d5-4d04-bc61-aa69d8aeef52\") " pod="openstack/kube-state-metrics-0" Jan 23 18:25:03 crc kubenswrapper[4688]: I0123 18:25:03.140735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7czjf\" (UniqueName: \"kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf\") pod \"kube-state-metrics-0\" (UID: \"2592fa6b-08d5-4d04-bc61-aa69d8aeef52\") " pod="openstack/kube-state-metrics-0" Jan 23 18:25:03 crc kubenswrapper[4688]: I0123 18:25:03.153685 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.147110 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.185268 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.208052 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.229404 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.254417 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7vbgs" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.267200 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.267236 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.268969 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.269229 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.272846 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.286506 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.286757 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgftj\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430505 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430546 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc 
kubenswrapper[4688]: I0123 18:25:04.430588 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430624 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430670 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430727 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430755 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.430809 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532105 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532174 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532295 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532401 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgftj\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532435 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532472 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532496 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.532526 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.534555 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.536677 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.537546 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.537836 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.537867 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b863878884b5da2d8536161babd136087c9985963bc488b510e2c38ec292fd7e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.539078 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.543870 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.546788 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.559477 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.564127 
4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.581887 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.607350 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2592fa6b-08d5-4d04-bc61-aa69d8aeef52","Type":"ContainerStarted","Data":"2384d316add0c2ee8bddf5c0299a1116b2b5aaaa9e664fe635a7e3a9292166c2"} Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.609605 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgftj\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj\") pod \"prometheus-metric-storage-0\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:04 crc kubenswrapper[4688]: I0123 18:25:04.625369 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.286302 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zl7mq"] Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.288457 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.300455 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bqbdm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.300738 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.301607 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.315437 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq"] Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.328087 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rjmgm"] Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.331985 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.337302 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rjmgm"] Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.416723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-etc-ovs\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420543 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-scripts\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420609 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/c58b6a90-e622-44bd-824a-7bc35f16190e-kube-api-access-fvl4b\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-ovn-controller-tls-certs\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420776 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-run\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-combined-ca-bundle\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420956 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-lib\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.420984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.421040 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-drpm7\" (UniqueName: \"kubernetes.io/projected/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-kube-api-access-drpm7\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.421067 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c58b6a90-e622-44bd-824a-7bc35f16190e-scripts\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.421100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.421164 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-log\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.421241 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-log-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.424532 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-log\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524433 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-log-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-etc-ovs\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524527 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-scripts\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524551 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/c58b6a90-e622-44bd-824a-7bc35f16190e-kube-api-access-fvl4b\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524573 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-ovn-controller-tls-certs\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524609 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-run\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-combined-ca-bundle\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524684 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-lib\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524707 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c58b6a90-e622-44bd-824a-7bc35f16190e-scripts\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drpm7\" (UniqueName: \"kubernetes.io/projected/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-kube-api-access-drpm7\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524781 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.524951 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-log\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " 
pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.525102 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-run\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.525174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.525264 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-log-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.527614 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c58b6a90-e622-44bd-824a-7bc35f16190e-var-run-ovn\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.527842 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-var-lib\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.530303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-etc-ovs\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.530671 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-scripts\") pod \"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.532570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c58b6a90-e622-44bd-824a-7bc35f16190e-scripts\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.540389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-ovn-controller-tls-certs\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.556640 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drpm7\" (UniqueName: \"kubernetes.io/projected/99ba3329-3970-44e1-b6b0-c4c6a6db2b96-kube-api-access-drpm7\") pod 
\"ovn-controller-ovs-rjmgm\" (UID: \"99ba3329-3970-44e1-b6b0-c4c6a6db2b96\") " pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.561257 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvl4b\" (UniqueName: \"kubernetes.io/projected/c58b6a90-e622-44bd-824a-7bc35f16190e-kube-api-access-fvl4b\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.563806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58b6a90-e622-44bd-824a-7bc35f16190e-combined-ca-bundle\") pod \"ovn-controller-zl7mq\" (UID: \"c58b6a90-e622-44bd-824a-7bc35f16190e\") " pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.623173 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.658526 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerStarted","Data":"6629d82bdc86fc70a07c626096714670cab7e1076acf98f24e83b771491ecf31"} Jan 23 18:25:05 crc kubenswrapper[4688]: I0123 18:25:05.680893 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.435556 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq"] Jan 23 18:25:06 crc kubenswrapper[4688]: W0123 18:25:06.450357 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc58b6a90_e622_44bd_824a_7bc35f16190e.slice/crio-05379220bfb5d72f096141b2e16c0bb4e1ce61902ed8b4d61373645b756f986f WatchSource:0}: Error finding container 05379220bfb5d72f096141b2e16c0bb4e1ce61902ed8b4d61373645b756f986f: Status 404 returned error can't find the container with id 05379220bfb5d72f096141b2e16c0bb4e1ce61902ed8b4d61373645b756f986f Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.720414 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq" event={"ID":"c58b6a90-e622-44bd-824a-7bc35f16190e","Type":"ContainerStarted","Data":"05379220bfb5d72f096141b2e16c0bb4e1ce61902ed8b4d61373645b756f986f"} Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.824703 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.826980 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.838451 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.838717 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.838909 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.839446 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.852848 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.857014 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5vsws" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864577 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864701 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7ht\" (UniqueName: \"kubernetes.io/projected/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-kube-api-access-jj7ht\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864750 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864817 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864845 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864871 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.864899 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:06 crc kubenswrapper[4688]: I0123 18:25:06.891133 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rjmgm"] Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966334 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966447 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7ht\" (UniqueName: \"kubernetes.io/projected/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-kube-api-access-jj7ht\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966528 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966584 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966620 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966638 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.966657 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdbserver-nb-tls-certs\") 
pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.968231 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.968595 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-config\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.969346 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:06.968858 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.017676 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.018119 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.018244 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.026198 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7ht\" (UniqueName: \"kubernetes.io/projected/ed6ebe9c-b75e-42b7-81ce-70c82b890fa4-kube-api-access-jj7ht\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.046857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4\") " pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.203709 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.328631 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2mkcg"] Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.330209 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.344746 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2mkcg"] Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.351566 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62b62e-86fd-434f-be45-f29d9ae27c76-config\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478228 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovs-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478336 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovn-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478465 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb2dm\" (UniqueName: \"kubernetes.io/projected/cb62b62e-86fd-434f-be45-f29d9ae27c76-kube-api-access-lb2dm\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.478502 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-combined-ca-bundle\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.580161 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovn-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " 
pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.580740 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb2dm\" (UniqueName: \"kubernetes.io/projected/cb62b62e-86fd-434f-be45-f29d9ae27c76-kube-api-access-lb2dm\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.580799 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-combined-ca-bundle\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.580842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62b62e-86fd-434f-be45-f29d9ae27c76-config\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.580932 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovs-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.581017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.582468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovn-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.582918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb62b62e-86fd-434f-be45-f29d9ae27c76-config\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.583045 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb62b62e-86fd-434f-be45-f29d9ae27c76-ovs-rundir\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.589669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 
18:25:07.594154 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb62b62e-86fd-434f-be45-f29d9ae27c76-combined-ca-bundle\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.603122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb2dm\" (UniqueName: \"kubernetes.io/projected/cb62b62e-86fd-434f-be45-f29d9ae27c76-kube-api-access-lb2dm\") pod \"ovn-controller-metrics-2mkcg\" (UID: \"cb62b62e-86fd-434f-be45-f29d9ae27c76\") " pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:07 crc kubenswrapper[4688]: I0123 18:25:07.659034 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2mkcg" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.614238 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.616306 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.622586 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-q7dgz" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.622943 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.623134 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.623427 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.632237 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-config\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741446 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741473 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " 
pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741584 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741626 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.741687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2j4\" (UniqueName: \"kubernetes.io/projected/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-kube-api-access-kg2j4\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.848117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.848333 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.852635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.852960 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2j4\" (UniqueName: \"kubernetes.io/projected/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-kube-api-access-kg2j4\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.853082 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.853117 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-config\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.853282 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.853336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:09 crc kubenswrapper[4688]: I0123 18:25:09.854316 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.869856 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.873943 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-config\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.875963 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.876526 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.881400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.881675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2j4\" (UniqueName: \"kubernetes.io/projected/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-kube-api-access-kg2j4\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc 
kubenswrapper[4688]: I0123 18:25:10.886406 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d2a676-bc2c-43fe-8195-8ae8300f7c8c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:10 crc kubenswrapper[4688]: I0123 18:25:10.919233 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"11d2a676-bc2c-43fe-8195-8ae8300f7c8c\") " pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:11 crc kubenswrapper[4688]: I0123 18:25:11.147813 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:20 crc kubenswrapper[4688]: I0123 18:25:20.958251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rjmgm" event={"ID":"99ba3329-3970-44e1-b6b0-c4c6a6db2b96","Type":"ContainerStarted","Data":"91146003d27dbfd3a8a968bf8acea5bd9134b7106159ba73d0a5cf6664ebe3cc"} Jan 23 18:25:21 crc kubenswrapper[4688]: E0123 18:25:21.247222 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 18:25:21 crc kubenswrapper[4688]: E0123 18:25:21.247522 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgrxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(5bf89cbd-9a52-45b0-8e35-1e070a678aea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:21 crc kubenswrapper[4688]: E0123 18:25:21.249369 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" Jan 23 18:25:21 crc kubenswrapper[4688]: E0123 18:25:21.967804 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" Jan 23 18:25:27 crc kubenswrapper[4688]: E0123 18:25:27.510521 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 18:25:27 crc kubenswrapper[4688]: E0123 18:25:27.511359 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 
's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lxcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(e4d36723-6a61-470a-9107-e5e8cf1c49a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:27 crc kubenswrapper[4688]: E0123 18:25:27.512619 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" Jan 23 18:25:28 crc kubenswrapper[4688]: E0123 18:25:28.034509 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" Jan 23 18:25:28 crc kubenswrapper[4688]: E0123 18:25:28.907562 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 23 18:25:28 crc kubenswrapper[4688]: E0123 18:25:28.908062 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n677h5fdh8h56fh5b5h569h89hc5h575h55fh578h5cfh566h89h599h8bh648h5fbh564h85hf5h5c5h594hdbh695hd8h5f7h68fhb9h5fh559h5c6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5vl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:28 crc kubenswrapper[4688]: E0123 18:25:28.909913 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
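
[annotation — the two "Unhandled Error" dumps above quote the rabbitmq setup-container's sh -c payload inline as one run-on string. Below is the same command transcribed verbatim, broken out one statement per line with comments for readability; the paths exist only inside the openstack-rabbitmq image, so this is a reading aid, not a runnable host script:]

    # Install the shared Erlang cookie with owner-only permissions.
    cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie \
      && chmod 600 /var/lib/rabbitmq/.erlang.cookie
    # Copy the enabled-plugins list onto the operator volume.
    cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins
    # Build .rabbitmqadmin.conf from the default_user secret, renaming its keys
    # to the [default] section names rabbitmqadmin expects.
    echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf \
      && sed -e 's/default_user/username/' -e 's/default_pass/password/' \
           /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf \
      && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf
    sleep 30
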
pod="openstack/memcached-0" podUID="1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c" Jan 23 18:25:29 crc kubenswrapper[4688]: E0123 18:25:29.031970 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.067263 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.068568 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-588cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-5nd8k_openstack(1bfd835d-1fb9-40e2-b28d-a081f287cfdb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.070579 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" podUID="1bfd835d-1fb9-40e2-b28d-a081f287cfdb" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.096962 
4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" podUID="1bfd835d-1fb9-40e2-b28d-a081f287cfdb" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.142368 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.142672 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sglp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5zvxl_openstack(5db6a74b-02de-4be4-b074-2f1e0002d74d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.143882 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" podUID="5db6a74b-02de-4be4-b074-2f1e0002d74d" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.148560 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.148842 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cz79q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zw5fk_openstack(f9bae02c-8813-4e0f-8781-7242cb10fd50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.150023 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" podUID="f9bae02c-8813-4e0f-8781-7242cb10fd50" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.174403 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.175290 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) 
--port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wjs9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7kk28_openstack(725b1457-ea72-4a1a-9d1a-59db2c9894dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:25:36 crc kubenswrapper[4688]: E0123 18:25:36.176415 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" podUID="725b1457-ea72-4a1a-9d1a-59db2c9894dc" Jan 23 18:25:36 crc kubenswrapper[4688]: I0123 18:25:36.614028 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2mkcg"] Jan 23 18:25:36 crc kubenswrapper[4688]: I0123 18:25:36.703538 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 18:25:36 crc kubenswrapper[4688]: W0123 18:25:36.733176 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded6ebe9c_b75e_42b7_81ce_70c82b890fa4.slice/crio-11216f14cb628ce96626e7adff979a1b5a1c14319117ebc8d1d5170e96e92b06 WatchSource:0}: Error finding container 11216f14cb628ce96626e7adff979a1b5a1c14319117ebc8d1d5170e96e92b06: Status 404 returned error can't find the container with id 11216f14cb628ce96626e7adff979a1b5a1c14319117ebc8d1d5170e96e92b06 Jan 23 18:25:37 crc kubenswrapper[4688]: I0123 18:25:37.045651 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 18:25:37 crc kubenswrapper[4688]: I0123 18:25:37.104913 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2mkcg" event={"ID":"cb62b62e-86fd-434f-be45-f29d9ae27c76","Type":"ContainerStarted","Data":"393e913cb9f17d096988ca385bff9af136c4ec8d549049400e169d6947e3ee97"} Jan 23 18:25:37 crc kubenswrapper[4688]: 
I0123 18:25:37.106788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4","Type":"ContainerStarted","Data":"11216f14cb628ce96626e7adff979a1b5a1c14319117ebc8d1d5170e96e92b06"} Jan 23 18:25:37 crc kubenswrapper[4688]: E0123 18:25:37.112259 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" podUID="f9bae02c-8813-4e0f-8781-7242cb10fd50" Jan 23 18:25:37 crc kubenswrapper[4688]: I0123 18:25:37.980309 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:25:37 crc kubenswrapper[4688]: I0123 18:25:37.991264 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.062287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sglp8\" (UniqueName: \"kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8\") pod \"5db6a74b-02de-4be4-b074-2f1e0002d74d\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.062481 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config\") pod \"5db6a74b-02de-4be4-b074-2f1e0002d74d\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.062525 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc\") pod \"5db6a74b-02de-4be4-b074-2f1e0002d74d\" (UID: \"5db6a74b-02de-4be4-b074-2f1e0002d74d\") " Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.062703 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs9j\" (UniqueName: \"kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j\") pod \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.062781 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config\") pod \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\" (UID: \"725b1457-ea72-4a1a-9d1a-59db2c9894dc\") " Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.063446 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5db6a74b-02de-4be4-b074-2f1e0002d74d" (UID: "5db6a74b-02de-4be4-b074-2f1e0002d74d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.063511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config" (OuterVolumeSpecName: "config") pod "725b1457-ea72-4a1a-9d1a-59db2c9894dc" (UID: "725b1457-ea72-4a1a-9d1a-59db2c9894dc"). 
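
[annotation — each dnsmasq-dns pod above runs the same init container: it invokes dnsmasq with --test appended, so the binary only parses the generated configuration and exits rather than serving; the ErrImagePull entries mean even this syntax check never ran. A minimal standalone sketch of the check, assuming dnsmasq is installed locally and the same conf-dir layout is present:]

    # Parse the configuration and exit; dnsmasq prints "syntax check OK" on success.
    dnsmasq --conf-dir=/etc/dnsmasq.d --no-resolv --port 5353 --test
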
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.063547 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config" (OuterVolumeSpecName: "config") pod "5db6a74b-02de-4be4-b074-2f1e0002d74d" (UID: "5db6a74b-02de-4be4-b074-2f1e0002d74d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.066623 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8" (OuterVolumeSpecName: "kube-api-access-sglp8") pod "5db6a74b-02de-4be4-b074-2f1e0002d74d" (UID: "5db6a74b-02de-4be4-b074-2f1e0002d74d"). InnerVolumeSpecName "kube-api-access-sglp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.067618 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j" (OuterVolumeSpecName: "kube-api-access-wjs9j") pod "725b1457-ea72-4a1a-9d1a-59db2c9894dc" (UID: "725b1457-ea72-4a1a-9d1a-59db2c9894dc"). InnerVolumeSpecName "kube-api-access-wjs9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.118036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" event={"ID":"725b1457-ea72-4a1a-9d1a-59db2c9894dc","Type":"ContainerDied","Data":"f8ab76734b87cd7a641950669a67f6967369b9c8737d65eee660a538954c30f0"} Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.118143 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7kk28" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.120565 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.120566 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5zvxl" event={"ID":"5db6a74b-02de-4be4-b074-2f1e0002d74d","Type":"ContainerDied","Data":"b3186b1c5dcdb8efbb7fe2741a34e16bea41d12280566360de1051c68cd394f0"} Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.123856 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"11d2a676-bc2c-43fe-8195-8ae8300f7c8c","Type":"ContainerStarted","Data":"3ad0d4a66d1022bd2d4479f54560d74e60b03c1648a3ee75de36e60651643226"} Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.165752 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sglp8\" (UniqueName: \"kubernetes.io/projected/5db6a74b-02de-4be4-b074-2f1e0002d74d-kube-api-access-sglp8\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.165832 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.165851 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5db6a74b-02de-4be4-b074-2f1e0002d74d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.165865 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjs9j\" (UniqueName: \"kubernetes.io/projected/725b1457-ea72-4a1a-9d1a-59db2c9894dc-kube-api-access-wjs9j\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.165901 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/725b1457-ea72-4a1a-9d1a-59db2c9894dc-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.196265 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.213214 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7kk28"] Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.236639 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:25:38 crc kubenswrapper[4688]: I0123 18:25:38.244117 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5zvxl"] Jan 23 18:25:39 crc kubenswrapper[4688]: I0123 18:25:39.138420 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697e30b7-f8ce-45c0-8299-b6021b11a639","Type":"ContainerStarted","Data":"6ea9e5ce2ae04d6d81abf45d9689324d344a6cc68a2fb20f56fb9cfe24e28e93"} Jan 23 18:25:39 crc kubenswrapper[4688]: I0123 18:25:39.392495 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db6a74b-02de-4be4-b074-2f1e0002d74d" path="/var/lib/kubelet/pods/5db6a74b-02de-4be4-b074-2f1e0002d74d/volumes" Jan 23 18:25:39 crc kubenswrapper[4688]: I0123 18:25:39.393532 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="725b1457-ea72-4a1a-9d1a-59db2c9894dc" path="/var/lib/kubelet/pods/725b1457-ea72-4a1a-9d1a-59db2c9894dc/volumes" Jan 23 18:25:44 crc kubenswrapper[4688]: I0123 18:25:44.192065 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerStarted","Data":"585ae3e33bffd05e2b2826ae62c1b0404f2a737a72380e1324b3affb1e54855e"} Jan 23 18:25:44 crc kubenswrapper[4688]: I0123 18:25:44.194418 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c805a15-64d3-4320-940e-a6859affbf9c","Type":"ContainerStarted","Data":"fcf00080a005ae91099580362c5c0b9d9e3aca566fe73e25e02cca4f10994986"} Jan 23 18:25:45 crc kubenswrapper[4688]: I0123 18:25:45.206818 4688 generic.go:334] "Generic (PLEG): container finished" podID="99ba3329-3970-44e1-b6b0-c4c6a6db2b96" containerID="e7f19ee6d64df2db96633c988207a69bf1a7bf7902dcb360d3fb9e861e37d7be" exitCode=0 Jan 23 18:25:45 crc kubenswrapper[4688]: I0123 18:25:45.206945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rjmgm" event={"ID":"99ba3329-3970-44e1-b6b0-c4c6a6db2b96","Type":"ContainerDied","Data":"e7f19ee6d64df2db96633c988207a69bf1a7bf7902dcb360d3fb9e861e37d7be"} Jan 23 18:25:45 crc kubenswrapper[4688]: I0123 18:25:45.212265 4688 generic.go:334] "Generic (PLEG): container finished" podID="697e30b7-f8ce-45c0-8299-b6021b11a639" containerID="6ea9e5ce2ae04d6d81abf45d9689324d344a6cc68a2fb20f56fb9cfe24e28e93" exitCode=0 Jan 23 18:25:45 crc kubenswrapper[4688]: I0123 18:25:45.213337 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697e30b7-f8ce-45c0-8299-b6021b11a639","Type":"ContainerDied","Data":"6ea9e5ce2ae04d6d81abf45d9689324d344a6cc68a2fb20f56fb9cfe24e28e93"} Jan 23 18:25:46 crc kubenswrapper[4688]: I0123 18:25:46.229847 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697e30b7-f8ce-45c0-8299-b6021b11a639","Type":"ContainerStarted","Data":"6bc11c99bc6694b668374e49153deba422fd47752aa0e5fade4daf603549dc8e"} Jan 23 18:25:46 crc kubenswrapper[4688]: I0123 18:25:46.232605 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4","Type":"ContainerStarted","Data":"7d25be8d0faaab6f968b9ee1acd2aa1542d584a64270d09da6f220aab9ae7467"} Jan 23 18:25:46 crc kubenswrapper[4688]: I0123 18:25:46.267504 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=15.554874562 podStartE2EDuration="47.267480048s" podCreationTimestamp="2026-01-23 18:24:59 +0000 UTC" firstStartedPulling="2026-01-23 18:25:02.027214831 +0000 UTC m=+1097.023039282" lastFinishedPulling="2026-01-23 18:25:33.739820327 +0000 UTC m=+1128.735644768" observedRunningTime="2026-01-23 18:25:46.255509863 +0000 UTC m=+1141.251334324" watchObservedRunningTime="2026-01-23 18:25:46.267480048 +0000 UTC m=+1141.263304489" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.245871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c","Type":"ContainerStarted","Data":"5f0ab1ac99ea916793cfbef356fbf57a807bfbf7c7801a753a029bfb9501a550"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.246810 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.248044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerStarted","Data":"452d44893c7bbd93eddc82ee7c1bbc84b3793989e71172184890ef83a205acd3"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.250718 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2592fa6b-08d5-4d04-bc61-aa69d8aeef52","Type":"ContainerStarted","Data":"051ed7968e6fd61b3718018de4019cf76ee819bb9d22aa2c7daa44a1adf025cc"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.251076 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.253200 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2mkcg" event={"ID":"cb62b62e-86fd-434f-be45-f29d9ae27c76","Type":"ContainerStarted","Data":"0293c7ef02f2a886322076776db17a542d5886e22879c497db2af4786aa15d47"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.256034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ed6ebe9c-b75e-42b7-81ce-70c82b890fa4","Type":"ContainerStarted","Data":"5abe959860027afd2131da41e1ed4e8b4d1a5e37cd41fd05ff8d40d187d3f0d1"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.259521 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq" event={"ID":"c58b6a90-e622-44bd-824a-7bc35f16190e","Type":"ContainerStarted","Data":"4902b131cb46b933b1a4eb2ed74213de58f971abe80df47b6e772b1e81507628"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.259641 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zl7mq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.268570 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rjmgm" event={"ID":"99ba3329-3970-44e1-b6b0-c4c6a6db2b96","Type":"ContainerStarted","Data":"23a3d5145586cd85326a5ceb19e741a0f7ba7e52444c9ae411052615a81c1e92"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.268684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rjmgm" event={"ID":"99ba3329-3970-44e1-b6b0-c4c6a6db2b96","Type":"ContainerStarted","Data":"38d4ac8e85948511cd4382a4b784e62921ed59218b7cfe1ea59315fc28e7a650"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.268764 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.270117 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.944338209 podStartE2EDuration="47.270094642s" podCreationTimestamp="2026-01-23 18:25:00 +0000 UTC" firstStartedPulling="2026-01-23 18:25:02.005874753 +0000 UTC m=+1097.001699194" lastFinishedPulling="2026-01-23 18:25:45.331631186 +0000 UTC m=+1140.327455627" observedRunningTime="2026-01-23 18:25:47.264303223 +0000 UTC m=+1142.260127684" watchObservedRunningTime="2026-01-23 18:25:47.270094642 +0000 UTC m=+1142.265919083" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.271745 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"11d2a676-bc2c-43fe-8195-8ae8300f7c8c","Type":"ContainerStarted","Data":"851793fc53b54164d2947953912f53ca21b19525ebd777faa5c8732a04f2b314"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.271785 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"11d2a676-bc2c-43fe-8195-8ae8300f7c8c","Type":"ContainerStarted","Data":"e72f987d38234fc024c67a1bbd118c99bd7bec41d1a319b3802312a21763a1b6"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.275066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerStarted","Data":"c204bcdba9565296476b3294dc89caf2f775ae30d177f4c16ab8aff9f9b3c995"} Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.295956 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zl7mq" podStartSLOduration=10.886003921 podStartE2EDuration="42.295929865s" podCreationTimestamp="2026-01-23 18:25:05 +0000 UTC" firstStartedPulling="2026-01-23 18:25:06.454238296 +0000 UTC m=+1101.450062737" lastFinishedPulling="2026-01-23 18:25:37.86416424 +0000 UTC m=+1132.859988681" observedRunningTime="2026-01-23 18:25:47.28532852 +0000 UTC m=+1142.281152961" watchObservedRunningTime="2026-01-23 18:25:47.295929865 +0000 UTC m=+1142.291754306" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.308278 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2mkcg" podStartSLOduration=31.703039048 podStartE2EDuration="40.308250368s" podCreationTimestamp="2026-01-23 18:25:07 +0000 UTC" firstStartedPulling="2026-01-23 18:25:36.72613788 +0000 UTC m=+1131.721962321" lastFinishedPulling="2026-01-23 18:25:45.3313492 +0000 UTC m=+1140.327173641" observedRunningTime="2026-01-23 18:25:47.304889464 +0000 UTC m=+1142.300713925" watchObservedRunningTime="2026-01-23 18:25:47.308250368 +0000 UTC m=+1142.304074829" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.342931 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.138887389 podStartE2EDuration="45.342901337s" podCreationTimestamp="2026-01-23 18:25:02 +0000 UTC" firstStartedPulling="2026-01-23 18:25:04.15788942 +0000 UTC m=+1099.153713871" lastFinishedPulling="2026-01-23 18:25:45.361903378 +0000 UTC m=+1140.357727819" observedRunningTime="2026-01-23 18:25:47.33578776 +0000 UTC m=+1142.331612201" watchObservedRunningTime="2026-01-23 18:25:47.342901337 +0000 UTC m=+1142.338725778" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.389699 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=34.688196065 podStartE2EDuration="42.389679566s" podCreationTimestamp="2026-01-23 18:25:05 +0000 UTC" firstStartedPulling="2026-01-23 18:25:36.739434805 +0000 UTC m=+1131.735259246" lastFinishedPulling="2026-01-23 18:25:44.440918306 +0000 UTC m=+1139.436742747" observedRunningTime="2026-01-23 18:25:47.37544041 +0000 UTC m=+1142.371264881" watchObservedRunningTime="2026-01-23 18:25:47.389679566 +0000 UTC m=+1142.385504007" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.450411 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rjmgm" podStartSLOduration=24.746048779 podStartE2EDuration="42.450383343s" podCreationTimestamp="2026-01-23 18:25:05 +0000 UTC" firstStartedPulling="2026-01-23 18:25:20.148041154 +0000 UTC m=+1115.143865595" lastFinishedPulling="2026-01-23 18:25:37.852375708 +0000 UTC m=+1132.848200159" observedRunningTime="2026-01-23 18:25:47.443454959 +0000 UTC m=+1142.439279420" watchObservedRunningTime="2026-01-23 
18:25:47.450383343 +0000 UTC m=+1142.446207784" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.485005 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=32.040232629 podStartE2EDuration="39.484983831s" podCreationTimestamp="2026-01-23 18:25:08 +0000 UTC" firstStartedPulling="2026-01-23 18:25:37.877322372 +0000 UTC m=+1132.873146813" lastFinishedPulling="2026-01-23 18:25:45.322073574 +0000 UTC m=+1140.317898015" observedRunningTime="2026-01-23 18:25:47.478478137 +0000 UTC m=+1142.474302578" watchObservedRunningTime="2026-01-23 18:25:47.484983831 +0000 UTC m=+1142.480808262" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.664596 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.774115 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.779911 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.783630 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.786257 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.882661 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.882740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzfl\" (UniqueName: \"kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.882779 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.882992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.952934 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.986856 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config\") pod 
\"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.986938 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzfl\" (UniqueName: \"kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.986974 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.987010 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.990261 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.990806 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.992092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:47 crc kubenswrapper[4688]: I0123 18:25:47.996370 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.002385 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.007734 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.030735 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzfl\" (UniqueName: \"kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl\") pod \"dnsmasq-dns-7fd796d7df-vgjnq\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.052700 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.124062 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.194976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.195048 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.195079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.195133 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.195172 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdh6f\" (UniqueName: \"kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.213354 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.296768 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc\") pod \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.296960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-588cc\" (UniqueName: \"kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc\") pod \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297208 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config\") pod \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\" (UID: \"1bfd835d-1fb9-40e2-b28d-a081f287cfdb\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297564 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297642 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297739 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdh6f\" (UniqueName: \"kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297595 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bfd835d-1fb9-40e2-b28d-a081f287cfdb" (UID: "1bfd835d-1fb9-40e2-b28d-a081f287cfdb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.297882 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config" (OuterVolumeSpecName: "config") pod "1bfd835d-1fb9-40e2-b28d-a081f287cfdb" (UID: "1bfd835d-1fb9-40e2-b28d-a081f287cfdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.299014 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.299046 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.299063 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.299819 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.300016 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.300057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.300115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" event={"ID":"1bfd835d-1fb9-40e2-b28d-a081f287cfdb","Type":"ContainerDied","Data":"90101393e56ee8e07d2a4f6bf2a7003b388b96ce4a861f466aa22df755026ee0"} Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.301425 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-5nd8k" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.302378 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.321465 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc" (OuterVolumeSpecName: "kube-api-access-588cc") pod "1bfd835d-1fb9-40e2-b28d-a081f287cfdb" (UID: "1bfd835d-1fb9-40e2-b28d-a081f287cfdb"). 
InnerVolumeSpecName "kube-api-access-588cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.322689 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdh6f\" (UniqueName: \"kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f\") pod \"dnsmasq-dns-86db49b7ff-cdkkm\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.380429 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.410347 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-588cc\" (UniqueName: \"kubernetes.io/projected/1bfd835d-1fb9-40e2-b28d-a081f287cfdb-kube-api-access-588cc\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.429333 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.620898 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config\") pod \"f9bae02c-8813-4e0f-8781-7242cb10fd50\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.621068 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc\") pod \"f9bae02c-8813-4e0f-8781-7242cb10fd50\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.621284 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz79q\" (UniqueName: \"kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q\") pod \"f9bae02c-8813-4e0f-8781-7242cb10fd50\" (UID: \"f9bae02c-8813-4e0f-8781-7242cb10fd50\") " Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.621513 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config" (OuterVolumeSpecName: "config") pod "f9bae02c-8813-4e0f-8781-7242cb10fd50" (UID: "f9bae02c-8813-4e0f-8781-7242cb10fd50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.621837 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9bae02c-8813-4e0f-8781-7242cb10fd50" (UID: "f9bae02c-8813-4e0f-8781-7242cb10fd50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.627296 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q" (OuterVolumeSpecName: "kube-api-access-cz79q") pod "f9bae02c-8813-4e0f-8781-7242cb10fd50" (UID: "f9bae02c-8813-4e0f-8781-7242cb10fd50"). InnerVolumeSpecName "kube-api-access-cz79q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.640068 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.640111 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9bae02c-8813-4e0f-8781-7242cb10fd50-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.640125 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz79q\" (UniqueName: \"kubernetes.io/projected/f9bae02c-8813-4e0f-8781-7242cb10fd50-kube-api-access-cz79q\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.693675 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.698017 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-5nd8k"] Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.739851 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:25:48 crc kubenswrapper[4688]: I0123 18:25:48.974512 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.204370 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.252938 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.312701 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" event={"ID":"620901ff-feb3-42a3-a332-973147a2b0d3","Type":"ContainerStarted","Data":"a870d9fa646ef76b2f79b544a9a5bd374dc41c02b37e427da357462023664c98"} Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.314324 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" event={"ID":"f9bae02c-8813-4e0f-8781-7242cb10fd50","Type":"ContainerDied","Data":"cdedac1ed6e023d7f48b77a64fcd9db9c30b106db901948b6e85ac0beed9e7a9"} Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.314429 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zw5fk" Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.324159 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" event={"ID":"772ab98a-c235-4306-962e-a5a08f25e71f","Type":"ContainerStarted","Data":"ff1d77426d7f0d18f7e5eaba819e734777869e5d0e6e13391b4ab93dab0dac6d"} Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.324359 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.403795 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfd835d-1fb9-40e2-b28d-a081f287cfdb" path="/var/lib/kubelet/pods/1bfd835d-1fb9-40e2-b28d-a081f287cfdb/volumes" Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.404690 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:25:49 crc kubenswrapper[4688]: I0123 18:25:49.404730 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zw5fk"] Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.149552 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.196729 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.333515 4688 generic.go:334] "Generic (PLEG): container finished" podID="4c805a15-64d3-4320-940e-a6859affbf9c" containerID="fcf00080a005ae91099580362c5c0b9d9e3aca566fe73e25e02cca4f10994986" exitCode=0 Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.333620 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c805a15-64d3-4320-940e-a6859affbf9c","Type":"ContainerDied","Data":"fcf00080a005ae91099580362c5c0b9d9e3aca566fe73e25e02cca4f10994986"} Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.339049 4688 generic.go:334] "Generic (PLEG): container finished" podID="620901ff-feb3-42a3-a332-973147a2b0d3" containerID="335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3" exitCode=0 Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.339133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" event={"ID":"620901ff-feb3-42a3-a332-973147a2b0d3","Type":"ContainerDied","Data":"335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3"} Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.341397 4688 generic.go:334] "Generic (PLEG): container finished" podID="772ab98a-c235-4306-962e-a5a08f25e71f" containerID="44547ba5cd94834ed4044217abf955070e628c94062e8be36c713202fd0bc49c" exitCode=0 Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.341482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" event={"ID":"772ab98a-c235-4306-962e-a5a08f25e71f","Type":"ContainerDied","Data":"44547ba5cd94834ed4044217abf955070e628c94062e8be36c713202fd0bc49c"} Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.341856 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.401226 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 
18:25:50.734877 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:50 crc kubenswrapper[4688]: I0123 18:25:50.735683 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.352145 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2402796-b932-490a-852b-3e76ebe62cb9" containerID="585ae3e33bffd05e2b2826ae62c1b0404f2a737a72380e1324b3affb1e54855e" exitCode=0 Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.352349 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerDied","Data":"585ae3e33bffd05e2b2826ae62c1b0404f2a737a72380e1324b3affb1e54855e"} Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.355311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" event={"ID":"772ab98a-c235-4306-962e-a5a08f25e71f","Type":"ContainerStarted","Data":"6322029bc57d69ef9969972fba0034159007aeb7abcb99d2820c19a25dbc2165"} Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.374794 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9bae02c-8813-4e0f-8781-7242cb10fd50" path="/var/lib/kubelet/pods/f9bae02c-8813-4e0f-8781-7242cb10fd50/volumes" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.375417 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.375455 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4c805a15-64d3-4320-940e-a6859affbf9c","Type":"ContainerStarted","Data":"85752a5eff52f58bede2fa0e8936d2a26999c6163f751dfd3f6fcce338b49a52"} Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.375478 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" event={"ID":"620901ff-feb3-42a3-a332-973147a2b0d3","Type":"ContainerStarted","Data":"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c"} Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.439128 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" podStartSLOduration=3.540218485 podStartE2EDuration="4.439111456s" podCreationTimestamp="2026-01-23 18:25:47 +0000 UTC" firstStartedPulling="2026-01-23 18:25:48.982874609 +0000 UTC m=+1143.978699050" lastFinishedPulling="2026-01-23 18:25:49.88176758 +0000 UTC m=+1144.877592021" observedRunningTime="2026-01-23 18:25:51.438592015 +0000 UTC m=+1146.434416476" watchObservedRunningTime="2026-01-23 18:25:51.439111456 +0000 UTC m=+1146.434935897" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.464561 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.457118126 podStartE2EDuration="54.464532321s" podCreationTimestamp="2026-01-23 18:24:57 +0000 UTC" firstStartedPulling="2026-01-23 18:24:59.819014591 +0000 UTC m=+1094.814839032" lastFinishedPulling="2026-01-23 18:25:36.826428796 +0000 UTC m=+1131.822253227" observedRunningTime="2026-01-23 18:25:51.463979048 +0000 UTC m=+1146.459803489" watchObservedRunningTime="2026-01-23 18:25:51.464532321 +0000 UTC m=+1146.460356762" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.494893 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" podStartSLOduration=4.038421663 podStartE2EDuration="4.494871544s" podCreationTimestamp="2026-01-23 18:25:47 +0000 UTC" firstStartedPulling="2026-01-23 18:25:48.748973697 +0000 UTC m=+1143.744798138" lastFinishedPulling="2026-01-23 18:25:49.205423578 +0000 UTC m=+1144.201248019" observedRunningTime="2026-01-23 18:25:51.485410294 +0000 UTC m=+1146.481234725" watchObservedRunningTime="2026-01-23 18:25:51.494871544 +0000 UTC m=+1146.490695975" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.676702 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.686448 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.695689 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.696254 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.696559 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.696855 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-z9v7h" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.701960 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.716163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.716548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.716880 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzjx\" (UniqueName: \"kubernetes.io/projected/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-kube-api-access-qnzjx\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.717024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.717215 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-scripts\") pod \"ovn-northd-0\" (UID: 
\"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.717414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-config\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.717554 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819659 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-scripts\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819688 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-config\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819730 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819826 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.819871 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.820912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-scripts\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.821047 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnzjx\" (UniqueName: \"kubernetes.io/projected/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-kube-api-access-qnzjx\") 
pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.821503 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.821151 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-config\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.828214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.828593 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.829055 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:51 crc kubenswrapper[4688]: I0123 18:25:51.845309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnzjx\" (UniqueName: \"kubernetes.io/projected/1d4b65e4-7b44-449a-9505-c5bbc9f67c6c-kube-api-access-qnzjx\") pod \"ovn-northd-0\" (UID: \"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c\") " pod="openstack/ovn-northd-0" Jan 23 18:25:52 crc kubenswrapper[4688]: I0123 18:25:52.011970 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 18:25:52 crc kubenswrapper[4688]: I0123 18:25:52.380992 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:52 crc kubenswrapper[4688]: I0123 18:25:52.382076 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:52 crc kubenswrapper[4688]: I0123 18:25:52.537245 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 18:25:53 crc kubenswrapper[4688]: I0123 18:25:53.159303 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 18:25:53 crc kubenswrapper[4688]: I0123 18:25:53.223802 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:53 crc kubenswrapper[4688]: I0123 18:25:53.329760 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 18:25:53 crc kubenswrapper[4688]: I0123 18:25:53.403466 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c","Type":"ContainerStarted","Data":"659c992319cc0630f9d3e13408eb68f581eb72330c0c635fb108941fca8f0c37"} Jan 23 18:25:54 crc kubenswrapper[4688]: I0123 18:25:54.414819 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c","Type":"ContainerStarted","Data":"0251434e5d579e04e8078a640b41cebeedf6b58b6ad4a5768aa44519c8f177f3"} Jan 23 18:25:54 crc kubenswrapper[4688]: I0123 18:25:54.415164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1d4b65e4-7b44-449a-9505-c5bbc9f67c6c","Type":"ContainerStarted","Data":"3f7bdc3edf6f21a9c22c6d8f4efd677ef0d5b51b7b867d74f80c99cecfb6759e"} Jan 23 18:25:54 crc kubenswrapper[4688]: I0123 18:25:54.416436 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 18:25:54 crc kubenswrapper[4688]: I0123 18:25:54.445257 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.129586966 podStartE2EDuration="3.445233801s" podCreationTimestamp="2026-01-23 18:25:51 +0000 UTC" firstStartedPulling="2026-01-23 18:25:52.54649965 +0000 UTC m=+1147.542324091" lastFinishedPulling="2026-01-23 18:25:53.862146485 +0000 UTC m=+1148.857970926" observedRunningTime="2026-01-23 18:25:54.437364882 +0000 UTC m=+1149.433189333" watchObservedRunningTime="2026-01-23 18:25:54.445233801 +0000 UTC m=+1149.441058232" Jan 23 18:25:55 crc kubenswrapper[4688]: I0123 18:25:55.968472 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 18:25:58 crc kubenswrapper[4688]: I0123 18:25:58.128601 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:58 crc kubenswrapper[4688]: I0123 18:25:58.385516 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:25:58 crc kubenswrapper[4688]: I0123 18:25:58.477519 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:25:58 crc kubenswrapper[4688]: I0123 18:25:58.478010 4688 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="dnsmasq-dns" containerID="cri-o://6322029bc57d69ef9969972fba0034159007aeb7abcb99d2820c19a25dbc2165" gracePeriod=10 Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.082056 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.082161 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.151772 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-46ktv"] Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.153086 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.158627 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.171203 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-46ktv"] Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.227521 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.317194 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.317729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnb9n\" (UniqueName: \"kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.420016 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnb9n\" (UniqueName: \"kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.420142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.423305 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc 
kubenswrapper[4688]: I0123 18:25:59.444634 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnb9n\" (UniqueName: \"kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n\") pod \"root-account-create-update-46ktv\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") " pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.475981 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-46ktv" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.482902 4688 generic.go:334] "Generic (PLEG): container finished" podID="772ab98a-c235-4306-962e-a5a08f25e71f" containerID="6322029bc57d69ef9969972fba0034159007aeb7abcb99d2820c19a25dbc2165" exitCode=0 Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.483048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" event={"ID":"772ab98a-c235-4306-962e-a5a08f25e71f","Type":"ContainerDied","Data":"6322029bc57d69ef9969972fba0034159007aeb7abcb99d2820c19a25dbc2165"} Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.493103 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.594778 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.625962 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bzfl\" (UniqueName: \"kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl\") pod \"772ab98a-c235-4306-962e-a5a08f25e71f\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.626013 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb\") pod \"772ab98a-c235-4306-962e-a5a08f25e71f\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.626145 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config\") pod \"772ab98a-c235-4306-962e-a5a08f25e71f\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.626209 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc\") pod \"772ab98a-c235-4306-962e-a5a08f25e71f\" (UID: \"772ab98a-c235-4306-962e-a5a08f25e71f\") " Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.635110 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl" (OuterVolumeSpecName: "kube-api-access-7bzfl") pod "772ab98a-c235-4306-962e-a5a08f25e71f" (UID: "772ab98a-c235-4306-962e-a5a08f25e71f"). InnerVolumeSpecName "kube-api-access-7bzfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.697415 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config" (OuterVolumeSpecName: "config") pod "772ab98a-c235-4306-962e-a5a08f25e71f" (UID: "772ab98a-c235-4306-962e-a5a08f25e71f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.701685 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "772ab98a-c235-4306-962e-a5a08f25e71f" (UID: "772ab98a-c235-4306-962e-a5a08f25e71f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.709743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "772ab98a-c235-4306-962e-a5a08f25e71f" (UID: "772ab98a-c235-4306-962e-a5a08f25e71f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.731808 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bzfl\" (UniqueName: \"kubernetes.io/projected/772ab98a-c235-4306-962e-a5a08f25e71f-kube-api-access-7bzfl\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.731864 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.731883 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:25:59 crc kubenswrapper[4688]: I0123 18:25:59.731897 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/772ab98a-c235-4306-962e-a5a08f25e71f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.104998 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-46ktv"] Jan 23 18:26:00 crc kubenswrapper[4688]: W0123 18:26:00.120866 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68222dfc_9703_4011_b190_a873da963ed4.slice/crio-0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366 WatchSource:0}: Error finding container 0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366: Status 404 returned error can't find the container with id 0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366 Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.502074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerStarted","Data":"f4561b60e502bb26c5ab460caab8790f517f32e08bc50237ddc636327e42e1ed"} Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.505142 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" event={"ID":"772ab98a-c235-4306-962e-a5a08f25e71f","Type":"ContainerDied","Data":"ff1d77426d7f0d18f7e5eaba819e734777869e5d0e6e13391b4ab93dab0dac6d"} Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.505261 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vgjnq" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.505280 4688 scope.go:117] "RemoveContainer" containerID="6322029bc57d69ef9969972fba0034159007aeb7abcb99d2820c19a25dbc2165" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.509703 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46ktv" event={"ID":"68222dfc-9703-4011-b190-a873da963ed4","Type":"ContainerStarted","Data":"86fd0dfdc243c7e96f43d02c85738d6306b1fb5ca0706cc203e73249995a5731"} Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.509732 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46ktv" event={"ID":"68222dfc-9703-4011-b190-a873da963ed4","Type":"ContainerStarted","Data":"0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366"} Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.544474 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-46ktv" podStartSLOduration=1.5444321429999999 podStartE2EDuration="1.544432143s" podCreationTimestamp="2026-01-23 18:25:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:00.525521435 +0000 UTC m=+1155.521345876" watchObservedRunningTime="2026-01-23 18:26:00.544432143 +0000 UTC m=+1155.540256584" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.548325 4688 scope.go:117] "RemoveContainer" containerID="44547ba5cd94834ed4044217abf955070e628c94062e8be36c713202fd0bc49c" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.556357 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.568077 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vgjnq"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.591323 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-n4xx6"] Jan 23 18:26:00 crc kubenswrapper[4688]: E0123 18:26:00.592256 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="dnsmasq-dns" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.592282 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="dnsmasq-dns" Jan 23 18:26:00 crc kubenswrapper[4688]: E0123 18:26:00.592335 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="init" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.592343 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="init" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.592566 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" containerName="dnsmasq-dns" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.593329 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.604968 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n4xx6"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.697415 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d6d4-account-create-update-xskbs"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.699155 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.704768 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.720997 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d6d4-account-create-update-xskbs"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.756810 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpggf\" (UniqueName: \"kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.756883 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.778080 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nlmgx"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.779774 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.800281 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nlmgx"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.859211 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67744\" (UniqueName: \"kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.859609 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpggf\" (UniqueName: \"kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.859708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.859796 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.860093 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.860409 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvw7d\" (UniqueName: \"kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.861002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.886172 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c94d-account-create-update-gbvch"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.887791 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.890529 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.890977 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpggf\" (UniqueName: \"kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf\") pod \"keystone-db-create-n4xx6\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") " pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.907417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c94d-account-create-update-gbvch"] Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.962555 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrnm\" (UniqueName: \"kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.962966 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.963261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.963413 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvw7d\" (UniqueName: \"kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.963534 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67744\" (UniqueName: \"kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.963662 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.964009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.964411 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.968290 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xx6" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.983263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67744\" (UniqueName: \"kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744\") pod \"keystone-d6d4-account-create-update-xskbs\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") " pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:00 crc kubenswrapper[4688]: I0123 18:26:00.983349 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvw7d\" (UniqueName: \"kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d\") pod \"placement-db-create-nlmgx\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") " pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.018798 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6d4-account-create-update-xskbs" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.065712 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrnm\" (UniqueName: \"kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.065918 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.066857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.096878 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrnm\" (UniqueName: \"kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm\") pod \"placement-c94d-account-create-update-gbvch\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") " pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 
18:26:01.107071 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nlmgx" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.240137 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c94d-account-create-update-gbvch" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.289674 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n4xx6"] Jan 23 18:26:01 crc kubenswrapper[4688]: W0123 18:26:01.297246 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b0f293_af50_4dda_9036_2247836670da.slice/crio-2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933 WatchSource:0}: Error finding container 2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933: Status 404 returned error can't find the container with id 2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933 Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.398367 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772ab98a-c235-4306-962e-a5a08f25e71f" path="/var/lib/kubelet/pods/772ab98a-c235-4306-962e-a5a08f25e71f/volumes" Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.515939 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nlmgx"] Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.524276 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xx6" event={"ID":"50b0f293-af50-4dda-9036-2247836670da","Type":"ContainerStarted","Data":"2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933"} Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.530466 4688 generic.go:334] "Generic (PLEG): container finished" podID="68222dfc-9703-4011-b190-a873da963ed4" containerID="86fd0dfdc243c7e96f43d02c85738d6306b1fb5ca0706cc203e73249995a5731" exitCode=0 Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.530514 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46ktv" event={"ID":"68222dfc-9703-4011-b190-a873da963ed4","Type":"ContainerDied","Data":"86fd0dfdc243c7e96f43d02c85738d6306b1fb5ca0706cc203e73249995a5731"} Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.602973 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d6d4-account-create-update-xskbs"] Jan 23 18:26:01 crc kubenswrapper[4688]: W0123 18:26:01.856405 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode54e40d0_f93c_42db_9efd_f53e6c26730d.slice/crio-e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7 WatchSource:0}: Error finding container e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7: Status 404 returned error can't find the container with id e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7 Jan 23 18:26:01 crc kubenswrapper[4688]: I0123 18:26:01.862634 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c94d-account-create-update-gbvch"] Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.543138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c94d-account-create-update-gbvch" event={"ID":"e54e40d0-f93c-42db-9efd-f53e6c26730d","Type":"ContainerStarted","Data":"44403079eba77be7158fa23173b4f341ec6b2eb0eb5eaba6c42d18a8242f4dde"} 
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.543546 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c94d-account-create-update-gbvch" event={"ID":"e54e40d0-f93c-42db-9efd-f53e6c26730d","Type":"ContainerStarted","Data":"e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.546865 4688 generic.go:334] "Generic (PLEG): container finished" podID="50b0f293-af50-4dda-9036-2247836670da" containerID="bef3f022e90cf656ff2ebab7c2bac4748c7db573642ec397d898e099adfb5c00" exitCode=0
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.546932 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xx6" event={"ID":"50b0f293-af50-4dda-9036-2247836670da","Type":"ContainerDied","Data":"bef3f022e90cf656ff2ebab7c2bac4748c7db573642ec397d898e099adfb5c00"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.549371 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nlmgx" event={"ID":"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4","Type":"ContainerStarted","Data":"ffabfd97b87e48d87c24901365ea4b502490159a16f398e49dcfadbea1c36042"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.549430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nlmgx" event={"ID":"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4","Type":"ContainerStarted","Data":"38f7ecd561181fd265ac261f067b1b010d94f6ad675a15148160e5c158c2cadd"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.551599 4688 generic.go:334] "Generic (PLEG): container finished" podID="56dffca7-1daa-4e5f-ba64-a2dbfac4e428" containerID="763647860de8d45cecf5788b54ff28c4d9c15102d752708bce6aa0e38b5388b0" exitCode=0
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.551707 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6d4-account-create-update-xskbs" event={"ID":"56dffca7-1daa-4e5f-ba64-a2dbfac4e428","Type":"ContainerDied","Data":"763647860de8d45cecf5788b54ff28c4d9c15102d752708bce6aa0e38b5388b0"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.551769 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6d4-account-create-update-xskbs" event={"ID":"56dffca7-1daa-4e5f-ba64-a2dbfac4e428","Type":"ContainerStarted","Data":"7a5a985190cade0c712ec651a92ec6ccf837daa040c6c5506e5a8789dcb57620"}
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.568396 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c94d-account-create-update-gbvch" podStartSLOduration=2.568366542 podStartE2EDuration="2.568366542s" podCreationTimestamp="2026-01-23 18:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:02.562398929 +0000 UTC m=+1157.558223370" watchObservedRunningTime="2026-01-23 18:26:02.568366542 +0000 UTC m=+1157.564190983"
Jan 23 18:26:02 crc kubenswrapper[4688]: I0123 18:26:02.601715 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nlmgx" podStartSLOduration=2.601696118 podStartE2EDuration="2.601696118s" podCreationTimestamp="2026-01-23 18:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:02.599869705 +0000 UTC m=+1157.595694156" watchObservedRunningTime="2026-01-23 18:26:02.601696118 +0000 UTC m=+1157.597520559"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.062617 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-46ktv"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.226805 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnb9n\" (UniqueName: \"kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n\") pod \"68222dfc-9703-4011-b190-a873da963ed4\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") "
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.226923 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts\") pod \"68222dfc-9703-4011-b190-a873da963ed4\" (UID: \"68222dfc-9703-4011-b190-a873da963ed4\") "
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.228494 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68222dfc-9703-4011-b190-a873da963ed4" (UID: "68222dfc-9703-4011-b190-a873da963ed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.236542 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n" (OuterVolumeSpecName: "kube-api-access-fnb9n") pod "68222dfc-9703-4011-b190-a873da963ed4" (UID: "68222dfc-9703-4011-b190-a873da963ed4"). InnerVolumeSpecName "kube-api-access-fnb9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.243902 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-xrfv5"]
Jan 23 18:26:03 crc kubenswrapper[4688]: E0123 18:26:03.244532 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68222dfc-9703-4011-b190-a873da963ed4" containerName="mariadb-account-create-update"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.244558 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="68222dfc-9703-4011-b190-a873da963ed4" containerName="mariadb-account-create-update"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.244764 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="68222dfc-9703-4011-b190-a873da963ed4" containerName="mariadb-account-create-update"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.245620 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xrfv5"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.313427 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xrfv5"]
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.329030 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.329100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xk7z\" (UniqueName: \"kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.329284 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnb9n\" (UniqueName: \"kubernetes.io/projected/68222dfc-9703-4011-b190-a873da963ed4-kube-api-access-fnb9n\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.329302 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68222dfc-9703-4011-b190-a873da963ed4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.412971 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-0e51-account-create-update-srsmf"]
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.414244 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0e51-account-create-update-srsmf"]
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.414358 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/watcher-0e51-account-create-update-srsmf" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.418855 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.432215 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.432300 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xk7z\" (UniqueName: \"kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.435452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.464508 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xk7z\" (UniqueName: \"kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z\") pod \"watcher-db-create-xrfv5\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.538167 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.548649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmp9c\" (UniqueName: \"kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.572265 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"] Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.576627 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.595867 4688 util.go:30] "No sandbox for pod can be found. 
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.599780 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46ktv" event={"ID":"68222dfc-9703-4011-b190-a873da963ed4","Type":"ContainerDied","Data":"0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366"}
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.599824 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a585c82805d66aef7719b62b9fb91646ca1e4eb11dc9e147fd468c1a867a366"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.599984 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-46ktv"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.637206 4688 generic.go:334] "Generic (PLEG): container finished" podID="e54e40d0-f93c-42db-9efd-f53e6c26730d" containerID="44403079eba77be7158fa23173b4f341ec6b2eb0eb5eaba6c42d18a8242f4dde" exitCode=0
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.640808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c94d-account-create-update-gbvch" event={"ID":"e54e40d0-f93c-42db-9efd-f53e6c26730d","Type":"ContainerDied","Data":"44403079eba77be7158fa23173b4f341ec6b2eb0eb5eaba6c42d18a8242f4dde"}
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.663879 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"]
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.655744 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664437 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2nw\" (UniqueName: \"kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664698 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664769 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmp9c\" (UniqueName: \"kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.664909 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.666002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.702897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerStarted","Data":"a091e31cb13c10ff1ffc9f8d03db944ee73b04eb586f12292a67f2b8702d9629"}
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.706062 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmp9c\" (UniqueName: \"kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c\") pod \"watcher-0e51-account-create-update-srsmf\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " pod="openstack/watcher-0e51-account-create-update-srsmf"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.718797 4688 generic.go:334] "Generic (PLEG): container finished" podID="6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" containerID="ffabfd97b87e48d87c24901365ea4b502490159a16f398e49dcfadbea1c36042" exitCode=0
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.719550 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nlmgx" event={"ID":"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4","Type":"ContainerDied","Data":"ffabfd97b87e48d87c24901365ea4b502490159a16f398e49dcfadbea1c36042"}
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.759729 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0e51-account-create-update-srsmf"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.773609 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp2nw\" (UniqueName: \"kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.773707 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.773750 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.773802 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.773855 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.774947 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.774971 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.775417 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.786722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.796513 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp2nw\" (UniqueName: \"kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw\") pod \"dnsmasq-dns-698758b865-jrlkz\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:03 crc kubenswrapper[4688]: I0123 18:26:03.920165 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jrlkz"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.368205 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-xrfv5"]
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.369985 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xx6"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.487800 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpggf\" (UniqueName: \"kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf\") pod \"50b0f293-af50-4dda-9036-2247836670da\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") "
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.488323 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts\") pod \"50b0f293-af50-4dda-9036-2247836670da\" (UID: \"50b0f293-af50-4dda-9036-2247836670da\") "
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.498433 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf" (OuterVolumeSpecName: "kube-api-access-zpggf") pod "50b0f293-af50-4dda-9036-2247836670da" (UID: "50b0f293-af50-4dda-9036-2247836670da"). InnerVolumeSpecName "kube-api-access-zpggf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.533628 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50b0f293-af50-4dda-9036-2247836670da" (UID: "50b0f293-af50-4dda-9036-2247836670da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.572798 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6d4-account-create-update-xskbs"
Jan 23 18:26:04 crc kubenswrapper[4688]: W0123 18:26:04.578903 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0e2bac7_43b6_484f_af41_54ebc8205242.slice/crio-2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57 WatchSource:0}: Error finding container 2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57: Status 404 returned error can't find the container with id 2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.593485 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-0e51-account-create-update-srsmf"]
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.604748 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50b0f293-af50-4dda-9036-2247836670da-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.604786 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpggf\" (UniqueName: \"kubernetes.io/projected/50b0f293-af50-4dda-9036-2247836670da-kube-api-access-zpggf\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.709954 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67744\" (UniqueName: \"kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744\") pod \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") "
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.710403 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts\") pod \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\" (UID: \"56dffca7-1daa-4e5f-ba64-a2dbfac4e428\") "
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.711122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56dffca7-1daa-4e5f-ba64-a2dbfac4e428" (UID: "56dffca7-1daa-4e5f-ba64-a2dbfac4e428"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.712398 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 23 18:26:04 crc kubenswrapper[4688]: E0123 18:26:04.715742 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dffca7-1daa-4e5f-ba64-a2dbfac4e428" containerName="mariadb-account-create-update"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.715787 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dffca7-1daa-4e5f-ba64-a2dbfac4e428" containerName="mariadb-account-create-update"
Jan 23 18:26:04 crc kubenswrapper[4688]: E0123 18:26:04.715846 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b0f293-af50-4dda-9036-2247836670da" containerName="mariadb-database-create"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.715854 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b0f293-af50-4dda-9036-2247836670da" containerName="mariadb-database-create"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.716224 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b0f293-af50-4dda-9036-2247836670da" containerName="mariadb-database-create"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.716245 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dffca7-1daa-4e5f-ba64-a2dbfac4e428" containerName="mariadb-account-create-update"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.716512 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744" (OuterVolumeSpecName: "kube-api-access-67744") pod "56dffca7-1daa-4e5f-ba64-a2dbfac4e428" (UID: "56dffca7-1daa-4e5f-ba64-a2dbfac4e428"). InnerVolumeSpecName "kube-api-access-67744". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:26:04 crc kubenswrapper[4688]: W0123 18:26:04.744071 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37ba61eb_0e82_4af5_8756_cc56550dd6ed.slice/crio-aed0d649cccaa5ebbc9920c5a4cc95b878fa4088ae23f95e5d2d735ed40e13ad WatchSource:0}: Error finding container aed0d649cccaa5ebbc9920c5a4cc95b878fa4088ae23f95e5d2d735ed40e13ad: Status 404 returned error can't find the container with id aed0d649cccaa5ebbc9920c5a4cc95b878fa4088ae23f95e5d2d735ed40e13ad
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.744810 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d6d4-account-create-update-xskbs"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.759120 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n4xx6"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788325 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0e51-account-create-update-srsmf" event={"ID":"c0e2bac7-43b6-484f-af41-54ebc8205242","Type":"ContainerStarted","Data":"2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57"}
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788382 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d6d4-account-create-update-xskbs" event={"ID":"56dffca7-1daa-4e5f-ba64-a2dbfac4e428","Type":"ContainerDied","Data":"7a5a985190cade0c712ec651a92ec6ccf837daa040c6c5506e5a8789dcb57620"}
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788406 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a5a985190cade0c712ec651a92ec6ccf837daa040c6c5506e5a8789dcb57620"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788424 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xrfv5" event={"ID":"5c5e7058-06e1-4c31-b185-61f48f8bd166","Type":"ContainerStarted","Data":"213dfce23d9aa13d7cf4957e08fa100e24e962c9d11500a1a621c61a6d464ae6"}
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788445 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"]
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788468 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n4xx6" event={"ID":"50b0f293-af50-4dda-9036-2247836670da","Type":"ContainerDied","Data":"2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933"}
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788482 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8d078d147130c8a439872310912bb248c07975934a794a3dabb17b3d1d6933"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788495 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.788639 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.799111 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.799162 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.799494 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.800491 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-77gl9"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.814967 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.815003 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67744\" (UniqueName: \"kubernetes.io/projected/56dffca7-1daa-4e5f-ba64-a2dbfac4e428-kube-api-access-67744\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927142 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927521 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb24002-aac7-4341-b434-58189d7792e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927553 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9hq\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-kube-api-access-px9hq\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927583 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-lock\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:04 crc kubenswrapper[4688]: I0123 18:26:04.927967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-cache\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030104 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030542 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-cache\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030634 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030698 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb24002-aac7-4341-b434-58189d7792e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030732 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9hq\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-kube-api-access-px9hq\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.030768 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-lock\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.030383 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.031271 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.031368 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:05.531341733 +0000 UTC m=+1160.527166374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.031454 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.031480 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-lock\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.032152 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ccb24002-aac7-4341-b434-58189d7792e5-cache\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.040420 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb24002-aac7-4341-b434-58189d7792e5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.052679 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9hq\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-kube-api-access-px9hq\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.061598 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.370752 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c94d-account-create-update-gbvch"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.439415 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nlmgx"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.542330 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvw7d\" (UniqueName: \"kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d\") pod \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") "
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.542491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts\") pod \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\" (UID: \"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4\") "
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.543480 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" (UID: "6822fdf0-3b76-48d1-92c5-0a6a31f12ae4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.543684 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrnm\" (UniqueName: \"kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm\") pod \"e54e40d0-f93c-42db-9efd-f53e6c26730d\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") "
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.543759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts\") pod \"e54e40d0-f93c-42db-9efd-f53e6c26730d\" (UID: \"e54e40d0-f93c-42db-9efd-f53e6c26730d\") "
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.544321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.544913 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.545090 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.545107 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: E0123 18:26:05.545175 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:06.545156993 +0000 UTC m=+1161.540981434 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.545795 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e54e40d0-f93c-42db-9efd-f53e6c26730d" (UID: "e54e40d0-f93c-42db-9efd-f53e6c26730d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.597295 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm" (OuterVolumeSpecName: "kube-api-access-pmrnm") pod "e54e40d0-f93c-42db-9efd-f53e6c26730d" (UID: "e54e40d0-f93c-42db-9efd-f53e6c26730d"). InnerVolumeSpecName "kube-api-access-pmrnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.607552 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d" (OuterVolumeSpecName: "kube-api-access-cvw7d") pod "6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" (UID: "6822fdf0-3b76-48d1-92c5-0a6a31f12ae4"). InnerVolumeSpecName "kube-api-access-cvw7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.646897 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvw7d\" (UniqueName: \"kubernetes.io/projected/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4-kube-api-access-cvw7d\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.647229 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrnm\" (UniqueName: \"kubernetes.io/projected/e54e40d0-f93c-42db-9efd-f53e6c26730d-kube-api-access-pmrnm\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.647239 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e54e40d0-f93c-42db-9efd-f53e6c26730d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.771077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jrlkz" event={"ID":"37ba61eb-0e82-4af5-8756-cc56550dd6ed","Type":"ContainerStarted","Data":"aed0d649cccaa5ebbc9920c5a4cc95b878fa4088ae23f95e5d2d735ed40e13ad"}
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.772479 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c94d-account-create-update-gbvch" event={"ID":"e54e40d0-f93c-42db-9efd-f53e6c26730d","Type":"ContainerDied","Data":"e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7"}
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.772513 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6544e0f580179352a1de65650e36894b37161698d3cc58ee006c3a6b9093dc7"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.772603 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c94d-account-create-update-gbvch"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.775374 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nlmgx" event={"ID":"6822fdf0-3b76-48d1-92c5-0a6a31f12ae4","Type":"ContainerDied","Data":"38f7ecd561181fd265ac261f067b1b010d94f6ad675a15148160e5c158c2cadd"}
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.775433 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f7ecd561181fd265ac261f067b1b010d94f6ad675a15148160e5c158c2cadd"
Jan 23 18:26:05 crc kubenswrapper[4688]: I0123 18:26:05.775478 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nlmgx"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.152282 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nxsr6"]
Jan 23 18:26:06 crc kubenswrapper[4688]: E0123 18:26:06.152923 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" containerName="mariadb-database-create"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.152949 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" containerName="mariadb-database-create"
Jan 23 18:26:06 crc kubenswrapper[4688]: E0123 18:26:06.152991 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54e40d0-f93c-42db-9efd-f53e6c26730d" containerName="mariadb-account-create-update"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.153001 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54e40d0-f93c-42db-9efd-f53e6c26730d" containerName="mariadb-account-create-update"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.153327 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54e40d0-f93c-42db-9efd-f53e6c26730d" containerName="mariadb-account-create-update"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.153356 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" containerName="mariadb-database-create"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.154327 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.164913 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxsr6"]
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.259351 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.260674 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tkj\" (UniqueName: \"kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.292012 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e0dd-account-create-update-wjhkg"]
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.294322 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.298552 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.305428 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0dd-account-create-update-wjhkg"]
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.367588 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.367699 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.367748 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szwh\" (UniqueName: \"kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.367790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tkj\" (UniqueName: \"kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.368814 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.401243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tkj\" (UniqueName: \"kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj\") pod \"glance-db-create-nxsr6\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.470104 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.470314 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szwh\" (UniqueName: \"kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.471394 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.475524 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxsr6"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.489942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szwh\" (UniqueName: \"kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh\") pod \"glance-e0dd-account-create-update-wjhkg\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.572738 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:06 crc kubenswrapper[4688]: E0123 18:26:06.572946 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 18:26:06 crc kubenswrapper[4688]: E0123 18:26:06.572970 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 18:26:06 crc kubenswrapper[4688]: E0123 18:26:06.573088 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:08.57306121 +0000 UTC m=+1163.568885651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.615655 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0dd-account-create-update-wjhkg"
Jan 23 18:26:06 crc kubenswrapper[4688]: W0123 18:26:06.962892 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod798fc77a_0ff3_414c_91e1_d747b952faa2.slice/crio-283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb WatchSource:0}: Error finding container 283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb: Status 404 returned error can't find the container with id 283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb
Jan 23 18:26:06 crc kubenswrapper[4688]: I0123 18:26:06.963292 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nxsr6"]
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.127244 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.141582 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e0dd-account-create-update-wjhkg"]
Jan 23 18:26:07 crc kubenswrapper[4688]: W0123 18:26:07.145279 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod524e08b9_7bbd_4e77_b8ab_901c43fd8283.slice/crio-29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c WatchSource:0}: Error finding container 29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c: Status 404 returned error can't find the container with id 29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.613788 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-46ktv"]
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.633491 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-46ktv"]
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.711304 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-82747"]
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.712547 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.719152 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.723087 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82747"]
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.798815 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0dd-account-create-update-wjhkg" event={"ID":"524e08b9-7bbd-4e77-b8ab-901c43fd8283","Type":"ContainerStarted","Data":"29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c"}
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.800699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxsr6" event={"ID":"798fc77a-0ff3-414c-91e1-d747b952faa2","Type":"ContainerStarted","Data":"283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb"}
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.804138 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzzgl\" (UniqueName: \"kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.804220 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.907307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzzgl\" (UniqueName: \"kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.907719 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.908643 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:07 crc kubenswrapper[4688]: I0123 18:26:07.931067 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzzgl\" (UniqueName: \"kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl\") pod \"root-account-create-update-82747\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " pod="openstack/root-account-create-update-82747"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.029499 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82747"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.452754 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vr6nh"]
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.454916 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.458640 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.458839 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.460053 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.472698 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vr6nh"]
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624097 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdlm\" (UniqueName: \"kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624225 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624362 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624388 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624514 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.624701 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0"
Jan 23 18:26:08 crc kubenswrapper[4688]: E0123 18:26:08.624910 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 18:26:08 crc kubenswrapper[4688]: E0123 18:26:08.624936 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 18:26:08 crc kubenswrapper[4688]: E0123 18:26:08.625039 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:12.6250151 +0000 UTC m=+1167.620839541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.704137 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-82747"]
Jan 23 18:26:08 crc kubenswrapper[4688]: W0123 18:26:08.716501 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66f04f7e_bee5_4db9_af24_fef76cd579a4.slice/crio-c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34 WatchSource:0}: Error finding container c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34: Status 404 returned error can't find the container with id c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726145 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726267 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh"
Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726293 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwdlm\" (UniqueName:
\"kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.726458 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.727396 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.727435 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.727882 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.731833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.734853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " 
pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.735079 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.748005 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwdlm\" (UniqueName: \"kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm\") pod \"swift-ring-rebalance-vr6nh\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.797122 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:08 crc kubenswrapper[4688]: I0123 18:26:08.814924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82747" event={"ID":"66f04f7e-bee5-4db9-af24-fef76cd579a4","Type":"ContainerStarted","Data":"c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34"} Jan 23 18:26:09 crc kubenswrapper[4688]: I0123 18:26:09.263491 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vr6nh"] Jan 23 18:26:09 crc kubenswrapper[4688]: W0123 18:26:09.278529 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7367189_3db1_4176_8281_2b50a8b3df49.slice/crio-f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347 WatchSource:0}: Error finding container f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347: Status 404 returned error can't find the container with id f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347 Jan 23 18:26:09 crc kubenswrapper[4688]: I0123 18:26:09.377889 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68222dfc-9703-4011-b190-a873da963ed4" path="/var/lib/kubelet/pods/68222dfc-9703-4011-b190-a873da963ed4/volumes" Jan 23 18:26:09 crc kubenswrapper[4688]: I0123 18:26:09.825962 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vr6nh" event={"ID":"d7367189-3db1-4176-8281-2b50a8b3df49","Type":"ContainerStarted","Data":"f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.845214 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c5e7058-06e1-4c31-b185-61f48f8bd166" containerID="396cef205752ceb1d27d7e34a9542203e0b70518485963c66db18fae9e06a4ab" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.845340 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xrfv5" event={"ID":"5c5e7058-06e1-4c31-b185-61f48f8bd166","Type":"ContainerDied","Data":"396cef205752ceb1d27d7e34a9542203e0b70518485963c66db18fae9e06a4ab"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.853393 4688 generic.go:334] "Generic (PLEG): container finished" podID="798fc77a-0ff3-414c-91e1-d747b952faa2" containerID="e19e1bee8992c2b6ffc64da691b98d4576bd662091a576c1992c5ac2f7aaaeba" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.853482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-nxsr6" event={"ID":"798fc77a-0ff3-414c-91e1-d747b952faa2","Type":"ContainerDied","Data":"e19e1bee8992c2b6ffc64da691b98d4576bd662091a576c1992c5ac2f7aaaeba"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.862224 4688 generic.go:334] "Generic (PLEG): container finished" podID="66f04f7e-bee5-4db9-af24-fef76cd579a4" containerID="35f4c055174ae464b8c87324dd32b8f91aad2c998bc4498b628a5af317e6343b" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.862342 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82747" event={"ID":"66f04f7e-bee5-4db9-af24-fef76cd579a4","Type":"ContainerDied","Data":"35f4c055174ae464b8c87324dd32b8f91aad2c998bc4498b628a5af317e6343b"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.878530 4688 generic.go:334] "Generic (PLEG): container finished" podID="c0e2bac7-43b6-484f-af41-54ebc8205242" containerID="d7538b29e36f37dfd4cd6e91ea157443e3e4d43d69d205c40fdd4c0700bfbbe6" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.878626 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0e51-account-create-update-srsmf" event={"ID":"c0e2bac7-43b6-484f-af41-54ebc8205242","Type":"ContainerDied","Data":"d7538b29e36f37dfd4cd6e91ea157443e3e4d43d69d205c40fdd4c0700bfbbe6"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.882491 4688 generic.go:334] "Generic (PLEG): container finished" podID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerID="67cd1110801959ec33e93d47acb0ec7095ed8383c356189d5d0b7c26fa3176c9" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.884259 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jrlkz" event={"ID":"37ba61eb-0e82-4af5-8756-cc56550dd6ed","Type":"ContainerDied","Data":"67cd1110801959ec33e93d47acb0ec7095ed8383c356189d5d0b7c26fa3176c9"} Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.895217 4688 generic.go:334] "Generic (PLEG): container finished" podID="524e08b9-7bbd-4e77-b8ab-901c43fd8283" containerID="fa77c1486e66af65f8a95b90c550baafcaa3929dee8614248b622c5b45a96fcd" exitCode=0 Jan 23 18:26:10 crc kubenswrapper[4688]: I0123 18:26:10.895371 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0dd-account-create-update-wjhkg" event={"ID":"524e08b9-7bbd-4e77-b8ab-901c43fd8283","Type":"ContainerDied","Data":"fa77c1486e66af65f8a95b90c550baafcaa3929dee8614248b622c5b45a96fcd"} Jan 23 18:26:11 crc kubenswrapper[4688]: I0123 18:26:11.911236 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jrlkz" event={"ID":"37ba61eb-0e82-4af5-8756-cc56550dd6ed","Type":"ContainerStarted","Data":"31502d29ac503f003a0dfb9f7b37d7e2e3fce08fdbbaeaefad880f0af1304fec"} Jan 23 18:26:11 crc kubenswrapper[4688]: I0123 18:26:11.911945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:26:11 crc kubenswrapper[4688]: I0123 18:26:11.937297 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podStartSLOduration=8.937273594 podStartE2EDuration="8.937273594s" podCreationTimestamp="2026-01-23 18:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:11.935270156 +0000 UTC m=+1166.931094607" watchObservedRunningTime="2026-01-23 18:26:11.937273594 +0000 UTC m=+1166.933098035" Jan 
23 18:26:12 crc kubenswrapper[4688]: I0123 18:26:12.721364 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0" Jan 23 18:26:12 crc kubenswrapper[4688]: E0123 18:26:12.721537 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 18:26:12 crc kubenswrapper[4688]: E0123 18:26:12.721663 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 18:26:12 crc kubenswrapper[4688]: E0123 18:26:12.721733 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:20.721710784 +0000 UTC m=+1175.717535225 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found Jan 23 18:26:14 crc kubenswrapper[4688]: I0123 18:26:14.941720 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nxsr6" event={"ID":"798fc77a-0ff3-414c-91e1-d747b952faa2","Type":"ContainerDied","Data":"283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb"} Jan 23 18:26:14 crc kubenswrapper[4688]: I0123 18:26:14.942268 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="283e0ea4707107826f4984911c6a3295a7419f069b1984f2458b61c9e3d8f4eb" Jan 23 18:26:14 crc kubenswrapper[4688]: I0123 18:26:14.990131 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxsr6" Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.079532 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts\") pod \"798fc77a-0ff3-414c-91e1-d747b952faa2\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.080404 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2tkj\" (UniqueName: \"kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj\") pod \"798fc77a-0ff3-414c-91e1-d747b952faa2\" (UID: \"798fc77a-0ff3-414c-91e1-d747b952faa2\") " Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.080346 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "798fc77a-0ff3-414c-91e1-d747b952faa2" (UID: "798fc77a-0ff3-414c-91e1-d747b952faa2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.082142 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/798fc77a-0ff3-414c-91e1-d747b952faa2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.090788 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj" (OuterVolumeSpecName: "kube-api-access-l2tkj") pod "798fc77a-0ff3-414c-91e1-d747b952faa2" (UID: "798fc77a-0ff3-414c-91e1-d747b952faa2"). InnerVolumeSpecName "kube-api-access-l2tkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.183994 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2tkj\" (UniqueName: \"kubernetes.io/projected/798fc77a-0ff3-414c-91e1-d747b952faa2-kube-api-access-l2tkj\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:15 crc kubenswrapper[4688]: I0123 18:26:15.950321 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nxsr6" Jan 23 18:26:18 crc kubenswrapper[4688]: I0123 18:26:18.921382 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.010601 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.010863 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="dnsmasq-dns" containerID="cri-o://fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c" gracePeriod=10 Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.440839 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0e51-account-create-update-srsmf" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.470867 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmp9c\" (UniqueName: \"kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c\") pod \"c0e2bac7-43b6-484f-af41-54ebc8205242\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.470941 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts\") pod \"c0e2bac7-43b6-484f-af41-54ebc8205242\" (UID: \"c0e2bac7-43b6-484f-af41-54ebc8205242\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.471955 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0e2bac7-43b6-484f-af41-54ebc8205242" (UID: "c0e2bac7-43b6-484f-af41-54ebc8205242"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.481511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c" (OuterVolumeSpecName: "kube-api-access-nmp9c") pod "c0e2bac7-43b6-484f-af41-54ebc8205242" (UID: "c0e2bac7-43b6-484f-af41-54ebc8205242"). InnerVolumeSpecName "kube-api-access-nmp9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.561098 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0dd-account-create-update-wjhkg" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.574009 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-82747" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.575499 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmp9c\" (UniqueName: \"kubernetes.io/projected/c0e2bac7-43b6-484f-af41-54ebc8205242-kube-api-access-nmp9c\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.575556 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0e2bac7-43b6-484f-af41-54ebc8205242-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.585861 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677382 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzzgl\" (UniqueName: \"kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl\") pod \"66f04f7e-bee5-4db9-af24-fef76cd579a4\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677735 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts\") pod \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677764 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2szwh\" (UniqueName: \"kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh\") pod \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\" (UID: \"524e08b9-7bbd-4e77-b8ab-901c43fd8283\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677812 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts\") pod \"66f04f7e-bee5-4db9-af24-fef76cd579a4\" (UID: \"66f04f7e-bee5-4db9-af24-fef76cd579a4\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677842 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xk7z\" (UniqueName: \"kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z\") pod \"5c5e7058-06e1-4c31-b185-61f48f8bd166\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.677996 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts\") pod \"5c5e7058-06e1-4c31-b185-61f48f8bd166\" (UID: \"5c5e7058-06e1-4c31-b185-61f48f8bd166\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.679046 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c5e7058-06e1-4c31-b185-61f48f8bd166" (UID: "5c5e7058-06e1-4c31-b185-61f48f8bd166"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.679611 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66f04f7e-bee5-4db9-af24-fef76cd579a4" (UID: "66f04f7e-bee5-4db9-af24-fef76cd579a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.680285 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "524e08b9-7bbd-4e77-b8ab-901c43fd8283" (UID: "524e08b9-7bbd-4e77-b8ab-901c43fd8283"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.687714 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z" (OuterVolumeSpecName: "kube-api-access-9xk7z") pod "5c5e7058-06e1-4c31-b185-61f48f8bd166" (UID: "5c5e7058-06e1-4c31-b185-61f48f8bd166"). InnerVolumeSpecName "kube-api-access-9xk7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.687914 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl" (OuterVolumeSpecName: "kube-api-access-xzzgl") pod "66f04f7e-bee5-4db9-af24-fef76cd579a4" (UID: "66f04f7e-bee5-4db9-af24-fef76cd579a4"). InnerVolumeSpecName "kube-api-access-xzzgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.699983 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh" (OuterVolumeSpecName: "kube-api-access-2szwh") pod "524e08b9-7bbd-4e77-b8ab-901c43fd8283" (UID: "524e08b9-7bbd-4e77-b8ab-901c43fd8283"). InnerVolumeSpecName "kube-api-access-2szwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780616 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzzgl\" (UniqueName: \"kubernetes.io/projected/66f04f7e-bee5-4db9-af24-fef76cd579a4-kube-api-access-xzzgl\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780649 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/524e08b9-7bbd-4e77-b8ab-901c43fd8283-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780663 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2szwh\" (UniqueName: \"kubernetes.io/projected/524e08b9-7bbd-4e77-b8ab-901c43fd8283-kube-api-access-2szwh\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780675 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66f04f7e-bee5-4db9-af24-fef76cd579a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780687 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xk7z\" (UniqueName: \"kubernetes.io/projected/5c5e7058-06e1-4c31-b185-61f48f8bd166-kube-api-access-9xk7z\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.780702 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c5e7058-06e1-4c31-b185-61f48f8bd166-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.885703 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.984356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdh6f\" (UniqueName: \"kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f\") pod \"620901ff-feb3-42a3-a332-973147a2b0d3\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.984517 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config\") pod \"620901ff-feb3-42a3-a332-973147a2b0d3\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.984596 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc\") pod \"620901ff-feb3-42a3-a332-973147a2b0d3\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.984715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb\") pod \"620901ff-feb3-42a3-a332-973147a2b0d3\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.984854 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb\") pod \"620901ff-feb3-42a3-a332-973147a2b0d3\" (UID: \"620901ff-feb3-42a3-a332-973147a2b0d3\") " Jan 23 18:26:19 crc kubenswrapper[4688]: I0123 18:26:19.991129 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f" (OuterVolumeSpecName: "kube-api-access-gdh6f") pod "620901ff-feb3-42a3-a332-973147a2b0d3" (UID: "620901ff-feb3-42a3-a332-973147a2b0d3"). InnerVolumeSpecName "kube-api-access-gdh6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.002987 4688 generic.go:334] "Generic (PLEG): container finished" podID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerID="c204bcdba9565296476b3294dc89caf2f775ae30d177f4c16ab8aff9f9b3c995" exitCode=0 Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.003077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerDied","Data":"c204bcdba9565296476b3294dc89caf2f775ae30d177f4c16ab8aff9f9b3c995"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.007605 4688 generic.go:334] "Generic (PLEG): container finished" podID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerID="452d44893c7bbd93eddc82ee7c1bbc84b3793989e71172184890ef83a205acd3" exitCode=0 Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.007760 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerDied","Data":"452d44893c7bbd93eddc82ee7c1bbc84b3793989e71172184890ef83a205acd3"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.013803 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-0e51-account-create-update-srsmf" event={"ID":"c0e2bac7-43b6-484f-af41-54ebc8205242","Type":"ContainerDied","Data":"2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.013866 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6e2872bde6e246459ae850609efc007b0fa10b4fd82b75e7ce276798732b57" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.013956 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-0e51-account-create-update-srsmf" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.017719 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-xrfv5" event={"ID":"5c5e7058-06e1-4c31-b185-61f48f8bd166","Type":"ContainerDied","Data":"213dfce23d9aa13d7cf4957e08fa100e24e962c9d11500a1a621c61a6d464ae6"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.017775 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213dfce23d9aa13d7cf4957e08fa100e24e962c9d11500a1a621c61a6d464ae6" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.017869 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-xrfv5" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.021560 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-82747" event={"ID":"66f04f7e-bee5-4db9-af24-fef76cd579a4","Type":"ContainerDied","Data":"c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.021633 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c644009ead1237a0bc14f225a802e371d04aaf09cf1a71bbfcc57790ba53be34" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.021748 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-82747" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.029306 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vr6nh" event={"ID":"d7367189-3db1-4176-8281-2b50a8b3df49","Type":"ContainerStarted","Data":"d5a16332c054d820891a752746ccce0535fde74abb58cadf442efd3cd2c50ffd"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.033573 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e0dd-account-create-update-wjhkg" event={"ID":"524e08b9-7bbd-4e77-b8ab-901c43fd8283","Type":"ContainerDied","Data":"29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.033621 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29759c61f33a20f3107e90f7970ffa6c72aaae01e6ec85bd091c1ac37213a71c" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.033686 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e0dd-account-create-update-wjhkg" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043332 4688 generic.go:334] "Generic (PLEG): container finished" podID="620901ff-feb3-42a3-a332-973147a2b0d3" containerID="fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c" exitCode=0 Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" event={"ID":"620901ff-feb3-42a3-a332-973147a2b0d3","Type":"ContainerDied","Data":"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" event={"ID":"620901ff-feb3-42a3-a332-973147a2b0d3","Type":"ContainerDied","Data":"a870d9fa646ef76b2f79b544a9a5bd374dc41c02b37e427da357462023664c98"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043464 4688 scope.go:117] "RemoveContainer" containerID="fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043624 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-cdkkm" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.043794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "620901ff-feb3-42a3-a332-973147a2b0d3" (UID: "620901ff-feb3-42a3-a332-973147a2b0d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.049812 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerStarted","Data":"99dfee13818eeaf2e6cd24ad65f0d7e058ac5f45387bec4a533060df18e182a1"} Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.056459 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config" (OuterVolumeSpecName: "config") pod "620901ff-feb3-42a3-a332-973147a2b0d3" (UID: "620901ff-feb3-42a3-a332-973147a2b0d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.069811 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "620901ff-feb3-42a3-a332-973147a2b0d3" (UID: "620901ff-feb3-42a3-a332-973147a2b0d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.074964 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "620901ff-feb3-42a3-a332-973147a2b0d3" (UID: "620901ff-feb3-42a3-a332-973147a2b0d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.083699 4688 scope.go:117] "RemoveContainer" containerID="335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.087557 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.088003 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.089829 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdh6f\" (UniqueName: \"kubernetes.io/projected/620901ff-feb3-42a3-a332-973147a2b0d3-kube-api-access-gdh6f\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.089863 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.089879 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/620901ff-feb3-42a3-a332-973147a2b0d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.105710 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vr6nh" podStartSLOduration=1.819408151 podStartE2EDuration="12.105684604s" podCreationTimestamp="2026-01-23 18:26:08 +0000 UTC" firstStartedPulling="2026-01-23 18:26:09.281146654 +0000 UTC m=+1164.276971095" lastFinishedPulling="2026-01-23 18:26:19.567423107 +0000 UTC m=+1174.563247548" observedRunningTime="2026-01-23 18:26:20.093744053 +0000 UTC m=+1175.089568524" watchObservedRunningTime="2026-01-23 18:26:20.105684604 +0000 UTC m=+1175.101509035" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.147588 4688 scope.go:117] "RemoveContainer" containerID="fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.148303 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c\": container with ID starting with 
fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c not found: ID does not exist" containerID="fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.148345 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c"} err="failed to get container status \"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c\": rpc error: code = NotFound desc = could not find container \"fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c\": container with ID starting with fef63b0f1a6f25e444bb5e17a60e0f2c1613fe9317dbfa2f0219cadd72d7496c not found: ID does not exist" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.148366 4688 scope.go:117] "RemoveContainer" containerID="335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.148686 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3\": container with ID starting with 335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3 not found: ID does not exist" containerID="335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.148741 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3"} err="failed to get container status \"335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3\": rpc error: code = NotFound desc = could not find container \"335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3\": container with ID starting with 335be44dd1aab3bb99b9ebcf91b0f51138db754e839b91cc41bbba4d00ae76b3 not found: ID does not exist" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.164392 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.097351307 podStartE2EDuration="1m17.164355389s" podCreationTimestamp="2026-01-23 18:25:03 +0000 UTC" firstStartedPulling="2026-01-23 18:25:05.509603595 +0000 UTC m=+1100.505428036" lastFinishedPulling="2026-01-23 18:26:19.576607677 +0000 UTC m=+1174.572432118" observedRunningTime="2026-01-23 18:26:20.160744343 +0000 UTC m=+1175.156568784" watchObservedRunningTime="2026-01-23 18:26:20.164355389 +0000 UTC m=+1175.160179830" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.389165 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.399391 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-cdkkm"] Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.700400 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zl7mq" podUID="c58b6a90-e622-44bd-824a-7bc35f16190e" containerName="ovn-controller" probeResult="failure" output=< Jan 23 18:26:20 crc kubenswrapper[4688]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 18:26:20 crc kubenswrapper[4688]: > Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.737361 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.760991 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rjmgm" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.805131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.805413 4688 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.805434 4688 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.805490 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift podName:ccb24002-aac7-4341-b434-58189d7792e5 nodeName:}" failed. No retries permitted until 2026-01-23 18:26:36.80547062 +0000 UTC m=+1191.801295061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift") pod "swift-storage-0" (UID: "ccb24002-aac7-4341-b434-58189d7792e5") : configmap "swift-ring-files" not found Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.995454 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zl7mq-config-cvnpr"] Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.995971 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e2bac7-43b6-484f-af41-54ebc8205242" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.995988 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e2bac7-43b6-484f-af41-54ebc8205242" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996006 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e7058-06e1-4c31-b185-61f48f8bd166" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996015 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5e7058-06e1-4c31-b185-61f48f8bd166" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996026 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798fc77a-0ff3-414c-91e1-d747b952faa2" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996034 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="798fc77a-0ff3-414c-91e1-d747b952faa2" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996053 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f04f7e-bee5-4db9-af24-fef76cd579a4" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996062 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f04f7e-bee5-4db9-af24-fef76cd579a4" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996082 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="init" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996091 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="init" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996117 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524e08b9-7bbd-4e77-b8ab-901c43fd8283" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996125 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="524e08b9-7bbd-4e77-b8ab-901c43fd8283" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: E0123 18:26:20.996141 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="dnsmasq-dns" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996148 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="dnsmasq-dns" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996396 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="798fc77a-0ff3-414c-91e1-d747b952faa2" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996411 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e2bac7-43b6-484f-af41-54ebc8205242" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996422 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="524e08b9-7bbd-4e77-b8ab-901c43fd8283" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996436 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" containerName="dnsmasq-dns" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996446 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f04f7e-bee5-4db9-af24-fef76cd579a4" containerName="mariadb-account-create-update" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.996457 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5e7058-06e1-4c31-b185-61f48f8bd166" containerName="mariadb-database-create" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.997230 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:20 crc kubenswrapper[4688]: I0123 18:26:20.999947 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.009856 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.009922 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.009971 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnqs\" (UniqueName: \"kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.010034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.010136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.010206 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.013918 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq-config-cvnpr"] Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.061639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerStarted","Data":"2c3e96b5f5164328bdb03f50cbc2bbda53492fe97ee3c38f936389940e89ec51"} Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.063088 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.066051 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerStarted","Data":"2f30879cdd11b516d8167680b37abb43efca558cb0f015fe16164231564e96ef"} Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.066928 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112083 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnqs\" (UniqueName: \"kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112296 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112574 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112780 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112903 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.112955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.114647 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.141581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnqs\" (UniqueName: \"kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs\") pod \"ovn-controller-zl7mq-config-cvnpr\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.144366 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371950.710445 podStartE2EDuration="1m26.144329684s" podCreationTimestamp="2026-01-23 18:24:55 +0000 UTC" firstStartedPulling="2026-01-23 18:24:58.423374292 +0000 UTC m=+1093.419198733" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:21.107949064 +0000 UTC m=+1176.103773505" watchObservedRunningTime="2026-01-23 18:26:21.144329684 +0000 UTC m=+1176.140154125" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.144568 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.314248399 podStartE2EDuration="1m25.144563771s" podCreationTimestamp="2026-01-23 18:24:56 +0000 UTC" firstStartedPulling="2026-01-23 18:24:59.053598806 +0000 UTC m=+1094.049423247" lastFinishedPulling="2026-01-23 18:25:37.883914178 +0000 UTC m=+1132.879738619" observedRunningTime="2026-01-23 18:26:21.131054673 +0000 UTC m=+1176.126879124" watchObservedRunningTime="2026-01-23 18:26:21.144563771 +0000 UTC m=+1176.140388212" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.315675 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.368661 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620901ff-feb3-42a3-a332-973147a2b0d3" path="/var/lib/kubelet/pods/620901ff-feb3-42a3-a332-973147a2b0d3/volumes" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.568327 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wcz56"] Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.570062 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.577601 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjpvx" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.586127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.592524 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wcz56"] Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.733237 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.733436 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.733563 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8pxj\" (UniqueName: \"kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.733635 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.835587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.835723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.835828 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8pxj\" (UniqueName: \"kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.835883 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle\") pod 
\"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.843027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.846719 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.846880 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.857791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8pxj\" (UniqueName: \"kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj\") pod \"glance-db-sync-wcz56\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.870201 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq-config-cvnpr"] Jan 23 18:26:21 crc kubenswrapper[4688]: W0123 18:26:21.885920 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04b53a0_df97_4063_b30d_11850ec1358b.slice/crio-1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf WatchSource:0}: Error finding container 1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf: Status 404 returned error can't find the container with id 1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf Jan 23 18:26:21 crc kubenswrapper[4688]: I0123 18:26:21.900568 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wcz56" Jan 23 18:26:22 crc kubenswrapper[4688]: I0123 18:26:22.092524 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-cvnpr" event={"ID":"c04b53a0-df97-4063-b30d-11850ec1358b","Type":"ContainerStarted","Data":"1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf"} Jan 23 18:26:22 crc kubenswrapper[4688]: I0123 18:26:22.758527 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wcz56"] Jan 23 18:26:23 crc kubenswrapper[4688]: I0123 18:26:23.104551 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wcz56" event={"ID":"620ac0a5-247a-4207-83e0-d6776834d4ad","Type":"ContainerStarted","Data":"31a71f3c92b29426a5957a90cb6802f908bad2969071acb6a4612f1505185f73"} Jan 23 18:26:23 crc kubenswrapper[4688]: I0123 18:26:23.107911 4688 generic.go:334] "Generic (PLEG): container finished" podID="c04b53a0-df97-4063-b30d-11850ec1358b" containerID="b589c945ffaa251f6676c52288282d5b4bc90e25dc3ac88c99b948f829fbf8b9" exitCode=0 Jan 23 18:26:23 crc kubenswrapper[4688]: I0123 18:26:23.107977 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-cvnpr" event={"ID":"c04b53a0-df97-4063-b30d-11850ec1358b","Type":"ContainerDied","Data":"b589c945ffaa251f6676c52288282d5b4bc90e25dc3ac88c99b948f829fbf8b9"} Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.552108 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609222 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609289 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnqs\" (UniqueName: \"kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609412 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609441 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609528 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.609546 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts\") pod \"c04b53a0-df97-4063-b30d-11850ec1358b\" (UID: \"c04b53a0-df97-4063-b30d-11850ec1358b\") " Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.610570 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.610893 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts" (OuterVolumeSpecName: "scripts") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.610981 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.611017 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.611048 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run" (OuterVolumeSpecName: "var-run") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.625381 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs" (OuterVolumeSpecName: "kube-api-access-2gnqs") pod "c04b53a0-df97-4063-b30d-11850ec1358b" (UID: "c04b53a0-df97-4063-b30d-11850ec1358b"). InnerVolumeSpecName "kube-api-access-2gnqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.626214 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712702 4688 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712743 4688 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712755 4688 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c04b53a0-df97-4063-b30d-11850ec1358b-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712766 4688 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712778 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c04b53a0-df97-4063-b30d-11850ec1358b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:24 crc kubenswrapper[4688]: I0123 18:26:24.712789 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnqs\" (UniqueName: \"kubernetes.io/projected/c04b53a0-df97-4063-b30d-11850ec1358b-kube-api-access-2gnqs\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.135699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-cvnpr" event={"ID":"c04b53a0-df97-4063-b30d-11850ec1358b","Type":"ContainerDied","Data":"1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf"} Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.136087 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1811bb84bfc738191488a076dddcdc8f36e4904e4012ade1a6dcb17ef1b725cf" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.136282 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-cvnpr" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.700362 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zl7mq" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.817001 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zl7mq-config-cvnpr"] Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.827117 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zl7mq-config-cvnpr"] Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.923318 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zl7mq-config-bgwvc"] Jan 23 18:26:25 crc kubenswrapper[4688]: E0123 18:26:25.923754 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b53a0-df97-4063-b30d-11850ec1358b" containerName="ovn-config" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.923772 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b53a0-df97-4063-b30d-11850ec1358b" containerName="ovn-config" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.923982 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04b53a0-df97-4063-b30d-11850ec1358b" containerName="ovn-config" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.924748 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.929939 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 18:26:25 crc kubenswrapper[4688]: I0123 18:26:25.941981 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq-config-bgwvc"] Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.045249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.045671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.045745 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6qcl\" (UniqueName: \"kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.046026 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 
18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.046102 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.046224 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.148995 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149110 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149262 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149407 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6qcl\" (UniqueName: \"kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.149490 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " 
pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.150482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.150621 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.151002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.154610 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.185263 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6qcl\" (UniqueName: \"kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl\") pod \"ovn-controller-zl7mq-config-bgwvc\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.243554 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:26 crc kubenswrapper[4688]: I0123 18:26:26.783377 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zl7mq-config-bgwvc"] Jan 23 18:26:27 crc kubenswrapper[4688]: I0123 18:26:27.170856 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-bgwvc" event={"ID":"f70f2341-12a9-4039-a78d-127fc268daaf","Type":"ContainerStarted","Data":"a74c037458e9a2b6e7483c6178d34634e1389c2d60356d719833426bd8ed13f1"} Jan 23 18:26:27 crc kubenswrapper[4688]: I0123 18:26:27.373848 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04b53a0-df97-4063-b30d-11850ec1358b" path="/var/lib/kubelet/pods/c04b53a0-df97-4063-b30d-11850ec1358b/volumes" Jan 23 18:26:28 crc kubenswrapper[4688]: I0123 18:26:28.182773 4688 generic.go:334] "Generic (PLEG): container finished" podID="f70f2341-12a9-4039-a78d-127fc268daaf" containerID="5f77ad78e6807968354ea2c8e95205a19a403f6a80b3ec8d3ab42a3b5e57f882" exitCode=0 Jan 23 18:26:28 crc kubenswrapper[4688]: I0123 18:26:28.182887 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-bgwvc" event={"ID":"f70f2341-12a9-4039-a78d-127fc268daaf","Type":"ContainerDied","Data":"5f77ad78e6807968354ea2c8e95205a19a403f6a80b3ec8d3ab42a3b5e57f882"} Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.195881 4688 generic.go:334] "Generic (PLEG): container finished" podID="d7367189-3db1-4176-8281-2b50a8b3df49" containerID="d5a16332c054d820891a752746ccce0535fde74abb58cadf442efd3cd2c50ffd" exitCode=0 Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.196171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vr6nh" event={"ID":"d7367189-3db1-4176-8281-2b50a8b3df49","Type":"ContainerDied","Data":"d5a16332c054d820891a752746ccce0535fde74abb58cadf442efd3cd2c50ffd"} Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.594895 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644491 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644656 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644717 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644729 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644810 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644880 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qcl\" (UniqueName: \"kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644911 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts\") pod \"f70f2341-12a9-4039-a78d-127fc268daaf\" (UID: \"f70f2341-12a9-4039-a78d-127fc268daaf\") " Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.644955 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run" (OuterVolumeSpecName: "var-run") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.645530 4688 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.645554 4688 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.645564 4688 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f70f2341-12a9-4039-a78d-127fc268daaf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.645644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.646515 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts" (OuterVolumeSpecName: "scripts") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.671116 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl" (OuterVolumeSpecName: "kube-api-access-d6qcl") pod "f70f2341-12a9-4039-a78d-127fc268daaf" (UID: "f70f2341-12a9-4039-a78d-127fc268daaf"). InnerVolumeSpecName "kube-api-access-d6qcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.747259 4688 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.747295 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qcl\" (UniqueName: \"kubernetes.io/projected/f70f2341-12a9-4039-a78d-127fc268daaf-kube-api-access-d6qcl\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:29 crc kubenswrapper[4688]: I0123 18:26:29.747306 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f70f2341-12a9-4039-a78d-127fc268daaf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.207049 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zl7mq-config-bgwvc" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.207543 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zl7mq-config-bgwvc" event={"ID":"f70f2341-12a9-4039-a78d-127fc268daaf","Type":"ContainerDied","Data":"a74c037458e9a2b6e7483c6178d34634e1389c2d60356d719833426bd8ed13f1"} Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.207618 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74c037458e9a2b6e7483c6178d34634e1389c2d60356d719833426bd8ed13f1" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.602824 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.670923 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwdlm\" (UniqueName: \"kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671240 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671461 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671569 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671744 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671862 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.671968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices\") pod \"d7367189-3db1-4176-8281-2b50a8b3df49\" (UID: \"d7367189-3db1-4176-8281-2b50a8b3df49\") " Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.673407 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.675845 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.685125 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm" (OuterVolumeSpecName: "kube-api-access-gwdlm") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "kube-api-access-gwdlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.713069 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.727929 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.729282 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zl7mq-config-bgwvc"] Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.737394 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zl7mq-config-bgwvc"] Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.749139 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts" (OuterVolumeSpecName: "scripts") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.763288 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d7367189-3db1-4176-8281-2b50a8b3df49" (UID: "d7367189-3db1-4176-8281-2b50a8b3df49"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.775939 4688 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.776334 4688 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.776498 4688 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.776666 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwdlm\" (UniqueName: \"kubernetes.io/projected/d7367189-3db1-4176-8281-2b50a8b3df49-kube-api-access-gwdlm\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.776816 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7367189-3db1-4176-8281-2b50a8b3df49-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.776912 4688 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d7367189-3db1-4176-8281-2b50a8b3df49-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:30 crc kubenswrapper[4688]: I0123 18:26:30.777035 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7367189-3db1-4176-8281-2b50a8b3df49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:31 crc kubenswrapper[4688]: I0123 18:26:31.221876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vr6nh" event={"ID":"d7367189-3db1-4176-8281-2b50a8b3df49","Type":"ContainerDied","Data":"f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347"} Jan 23 18:26:31 crc kubenswrapper[4688]: I0123 18:26:31.221922 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63a59f9524b600530e3f040dd270c6c87831dc7534cf56d276339794db66347" Jan 23 18:26:31 crc kubenswrapper[4688]: I0123 18:26:31.221998 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vr6nh" Jan 23 18:26:31 crc kubenswrapper[4688]: I0123 18:26:31.372080 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70f2341-12a9-4039-a78d-127fc268daaf" path="/var/lib/kubelet/pods/f70f2341-12a9-4039-a78d-127fc268daaf/volumes" Jan 23 18:26:34 crc kubenswrapper[4688]: I0123 18:26:34.626630 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:34 crc kubenswrapper[4688]: I0123 18:26:34.628945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:35 crc kubenswrapper[4688]: I0123 18:26:35.270501 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:36 crc kubenswrapper[4688]: I0123 18:26:36.855373 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0" Jan 23 18:26:36 crc kubenswrapper[4688]: I0123 18:26:36.869516 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb24002-aac7-4341-b434-58189d7792e5-etc-swift\") pod \"swift-storage-0\" (UID: \"ccb24002-aac7-4341-b434-58189d7792e5\") " pod="openstack/swift-storage-0" Jan 23 18:26:36 crc kubenswrapper[4688]: I0123 18:26:36.941147 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 18:26:37 crc kubenswrapper[4688]: I0123 18:26:37.616430 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.097446 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.244223 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-94mh9"] Jan 23 18:26:38 crc kubenswrapper[4688]: E0123 18:26:38.245090 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7367189-3db1-4176-8281-2b50a8b3df49" containerName="swift-ring-rebalance" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.245110 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7367189-3db1-4176-8281-2b50a8b3df49" containerName="swift-ring-rebalance" Jan 23 18:26:38 crc kubenswrapper[4688]: E0123 18:26:38.245140 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70f2341-12a9-4039-a78d-127fc268daaf" containerName="ovn-config" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.245147 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70f2341-12a9-4039-a78d-127fc268daaf" containerName="ovn-config" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.245377 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70f2341-12a9-4039-a78d-127fc268daaf" containerName="ovn-config" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.245400 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7367189-3db1-4176-8281-2b50a8b3df49" containerName="swift-ring-rebalance" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.252413 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.277222 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jznqw" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.285755 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.294692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.295037 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jggfx\" (UniqueName: \"kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.295372 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.295658 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.319929 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-zlf47"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.321879 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.346811 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-94mh9"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.387178 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zlf47"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.399701 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrlb\" (UniqueName: \"kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.400117 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.401368 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jggfx\" (UniqueName: \"kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.401627 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.401908 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.402104 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.413812 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.418929 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.432998 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.460412 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a15b-account-create-update-cgc5k"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.462410 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.472410 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.488873 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.489254 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="prometheus" containerID="cri-o://f4561b60e502bb26c5ab460caab8790f517f32e08bc50237ddc636327e42e1ed" gracePeriod=600 Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.489357 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="thanos-sidecar" containerID="cri-o://99dfee13818eeaf2e6cd24ad65f0d7e058ac5f45387bec4a533060df18e182a1" gracePeriod=600 Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.489413 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="config-reloader" containerID="cri-o://a091e31cb13c10ff1ffc9f8d03db944ee73b04eb586f12292a67f2b8702d9629" gracePeriod=600 Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.505553 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.505721 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbrlb\" (UniqueName: \"kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.506058 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jggfx\" (UniqueName: \"kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx\") pod \"watcher-db-sync-94mh9\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.507514 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " 
pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.528425 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2wc55"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.530235 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.539084 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a15b-account-create-update-cgc5k"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.582935 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-94mh9" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.583157 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbrlb\" (UniqueName: \"kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb\") pod \"cinder-db-create-zlf47\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.596094 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2wc55"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.607989 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfck5\" (UniqueName: \"kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.608522 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.608888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhh5m\" (UniqueName: \"kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.609091 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.659957 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.710559 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.710658 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfck5\" (UniqueName: \"kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.710689 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.710842 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhh5m\" (UniqueName: \"kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.712000 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.712465 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.733991 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfck5\" (UniqueName: \"kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5\") pod \"cinder-a15b-account-create-update-cgc5k\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.749931 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhh5m\" (UniqueName: \"kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m\") pod \"barbican-db-create-2wc55\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.764624 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9367-account-create-update-j4flr"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.766112 4688 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.773539 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.802558 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9367-account-create-update-j4flr"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.807768 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.814053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwtt\" (UniqueName: \"kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.814209 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.831561 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-twr6s"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.833657 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.842895 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.843215 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.843385 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.843622 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ttwkl" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.896393 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-twr6s"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.915898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwtt\" (UniqueName: \"kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.915994 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.916055 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8dx\" (UniqueName: \"kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.916086 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.916208 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.917820 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.955888 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwtt\" (UniqueName: \"kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt\") pod \"barbican-9367-account-create-update-j4flr\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.958486 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-26sbc"] Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.959974 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:38 crc kubenswrapper[4688]: I0123 18:26:38.973345 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-26sbc"] Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.056309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.057115 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.057352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8dx\" (UniqueName: \"kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.057538 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.057794 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2bbb\" (UniqueName: \"kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.066428 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.075661 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.082690 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-260f-account-create-update-8zf7b"] Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.086447 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.087833 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.092102 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.100279 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-260f-account-create-update-8zf7b"] Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.106709 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8dx\" (UniqueName: \"kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx\") pod \"keystone-db-sync-twr6s\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.122600 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.161653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.161756 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2bbb\" (UniqueName: \"kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.162934 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.173714 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twr6s" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.187687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2bbb\" (UniqueName: \"kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb\") pod \"neutron-db-create-26sbc\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.264549 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gv2r\" (UniqueName: \"kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.264633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328465 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2402796-b932-490a-852b-3e76ebe62cb9" containerID="99dfee13818eeaf2e6cd24ad65f0d7e058ac5f45387bec4a533060df18e182a1" exitCode=0 Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328830 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2402796-b932-490a-852b-3e76ebe62cb9" containerID="a091e31cb13c10ff1ffc9f8d03db944ee73b04eb586f12292a67f2b8702d9629" exitCode=0 Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328847 4688 generic.go:334] "Generic (PLEG): container finished" podID="f2402796-b932-490a-852b-3e76ebe62cb9" containerID="f4561b60e502bb26c5ab460caab8790f517f32e08bc50237ddc636327e42e1ed" exitCode=0 Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328883 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerDied","Data":"99dfee13818eeaf2e6cd24ad65f0d7e058ac5f45387bec4a533060df18e182a1"} Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328917 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerDied","Data":"a091e31cb13c10ff1ffc9f8d03db944ee73b04eb586f12292a67f2b8702d9629"} Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.328930 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerDied","Data":"f4561b60e502bb26c5ab460caab8790f517f32e08bc50237ddc636327e42e1ed"} Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.354232 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.366475 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gv2r\" (UniqueName: \"kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.366542 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.368258 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.388533 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gv2r\" (UniqueName: \"kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r\") pod \"neutron-260f-account-create-update-8zf7b\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.424922 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:39 crc kubenswrapper[4688]: I0123 18:26:39.626726 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.109:9090/-/ready\": dial tcp 10.217.0.109:9090: connect: connection refused" Jan 23 18:26:42 crc kubenswrapper[4688]: E0123 18:26:42.681868 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 23 18:26:42 crc kubenswrapper[4688]: E0123 18:26:42.682761 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8pxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-wcz56_openstack(620ac0a5-247a-4207-83e0-d6776834d4ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:26:42 crc kubenswrapper[4688]: E0123 18:26:42.684019 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-wcz56" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.104983 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252663 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252740 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252781 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252843 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgftj\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252893 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252928 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.252981 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.253285 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.253380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.253482 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file\") pod \"f2402796-b932-490a-852b-3e76ebe62cb9\" (UID: \"f2402796-b932-490a-852b-3e76ebe62cb9\") " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.260340 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.261692 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.262645 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.267330 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.268360 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj" (OuterVolumeSpecName: "kube-api-access-xgftj") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "kube-api-access-xgftj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.268427 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.269051 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config" (OuterVolumeSpecName: "config") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.272561 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out" (OuterVolumeSpecName: "config-out") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.299602 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.301538 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config" (OuterVolumeSpecName: "web-config") pod "f2402796-b932-490a-852b-3e76ebe62cb9" (UID: "f2402796-b932-490a-852b-3e76ebe62cb9"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.357361 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358040 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358059 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358074 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgftj\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-kube-api-access-xgftj\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358090 4688 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f2402796-b932-490a-852b-3e76ebe62cb9-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358102 4688 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f2402796-b932-490a-852b-3e76ebe62cb9-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358113 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f2402796-b932-490a-852b-3e76ebe62cb9-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358153 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") on node \"crc\" " Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358169 4688 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.358202 4688 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f2402796-b932-490a-852b-3e76ebe62cb9-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.377376 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f2402796-b932-490a-852b-3e76ebe62cb9","Type":"ContainerDied","Data":"6629d82bdc86fc70a07c626096714670cab7e1076acf98f24e83b771491ecf31"} Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.377462 4688 scope.go:117] "RemoveContainer" containerID="99dfee13818eeaf2e6cd24ad65f0d7e058ac5f45387bec4a533060df18e182a1" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.377394 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: E0123 18:26:43.398089 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-wcz56" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.427118 4688 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.427851 4688 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075") on node "crc" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.441394 4688 scope.go:117] "RemoveContainer" containerID="a091e31cb13c10ff1ffc9f8d03db944ee73b04eb586f12292a67f2b8702d9629" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.447882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a15b-account-create-update-cgc5k"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.461417 4688 reconciler_common.go:293] "Volume detached for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.503780 4688 scope.go:117] "RemoveContainer" containerID="f4561b60e502bb26c5ab460caab8790f517f32e08bc50237ddc636327e42e1ed" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.514213 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.527786 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.543241 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:43 crc kubenswrapper[4688]: E0123 18:26:43.544461 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="init-config-reloader" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544484 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="init-config-reloader" Jan 23 18:26:43 crc kubenswrapper[4688]: E0123 18:26:43.544502 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="thanos-sidecar" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544508 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="thanos-sidecar" Jan 23 18:26:43 crc kubenswrapper[4688]: E0123 18:26:43.544528 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="prometheus" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544534 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="prometheus" Jan 23 18:26:43 crc kubenswrapper[4688]: E0123 18:26:43.544547 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="config-reloader" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544552 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="config-reloader" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544835 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="prometheus" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544849 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="config-reloader" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.544861 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" containerName="thanos-sidecar" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.546829 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.551415 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.572352 4688 scope.go:117] "RemoveContainer" containerID="585ae3e33bffd05e2b2826ae62c1b0404f2a737a72380e1324b3affb1e54855e" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.572722 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.572833 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.573127 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.573285 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.573612 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7vbgs" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.573751 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.574040 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.574115 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.578590 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669581 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669721 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669756 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669789 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669858 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.669977 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.670015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.670076 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.670122 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.670155 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5g82\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.670226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.696656 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.775676 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776144 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776170 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776209 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776245 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776268 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776348 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.776427 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777534 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777649 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777705 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5g82\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.777769 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.778511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.780666 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.781835 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.782861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.787043 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.789306 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.789358 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b863878884b5da2d8536161babd136087c9985963bc488b510e2c38ec292fd7e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.790927 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.795617 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.801621 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.803328 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5g82\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.810340 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.824173 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-twr6s"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.826816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.842959 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9367-account-create-update-j4flr"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.858043 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-94mh9"] Jan 23 18:26:43 crc kubenswrapper[4688]: W0123 18:26:43.858525 4688 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda470f046_5473_4e59_9bb1_19eea38494e9.slice/crio-bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363 WatchSource:0}: Error finding container bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363: Status 404 returned error can't find the container with id bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363 Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.867227 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-260f-account-create-update-8zf7b"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.880084 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-26sbc"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.890933 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-zlf47"] Jan 23 18:26:43 crc kubenswrapper[4688]: W0123 18:26:43.893223 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265a42d2_70db_43df_a5bf_99a70bfed1cb.slice/crio-bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22 WatchSource:0}: Error finding container bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22: Status 404 returned error can't find the container with id bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22 Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.894245 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.899247 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2wc55"] Jan 23 18:26:43 crc kubenswrapper[4688]: I0123 18:26:43.915857 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.404076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zlf47" event={"ID":"265a42d2-70db-43df-a5bf-99a70bfed1cb","Type":"ContainerStarted","Data":"bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.418992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"94d2b1a3b2a05cc157ae5011bfb157bf35bfbaf1b02df3e37804362d0a6636af"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.421441 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2wc55" event={"ID":"8ef29235-1e3f-4732-9770-24cf93856028","Type":"ContainerStarted","Data":"c0ac227ade430da3c0c8b161d89093ce2f769ee2b07ad0aff18a726222534a00"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.427883 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-94mh9" event={"ID":"ea982eec-acb6-45c7-8f69-36df2323747c","Type":"ContainerStarted","Data":"51aee3656b5089d30d3e12fb7d57a6fd82e6a13d79a6dd0c8efb4bc047a20d33"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.433721 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a15b-account-create-update-cgc5k" event={"ID":"e56e3474-2934-4305-8ebf-353db7dbc00a","Type":"ContainerStarted","Data":"ff7f7b9767f65aac9b4b3d3c2b52509bf239f9eb735e64ba3f49157a4e82751a"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.433863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a15b-account-create-update-cgc5k" event={"ID":"e56e3474-2934-4305-8ebf-353db7dbc00a","Type":"ContainerStarted","Data":"78e809d0f87b889c478e51ded59c4c1e2d43c8a5e640e9c98eb0f0a399a5736d"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.441580 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-26sbc" event={"ID":"1f0a1072-51bd-47a1-a3e0-740f34f179c3","Type":"ContainerStarted","Data":"0ca2d6325783c894dd4bab5e5f45e54367a1b60ad0c99ace72905f48a4d290cc"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.441668 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-26sbc" event={"ID":"1f0a1072-51bd-47a1-a3e0-740f34f179c3","Type":"ContainerStarted","Data":"aa96f5076e76f0f4a152f2366b6c02bd98458642ca80464b05d2b9b237f920de"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.444578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twr6s" event={"ID":"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe","Type":"ContainerStarted","Data":"fbf8cf13af41ab659eec5db26e5572c06828b364cf198333d27becb54cb5bd68"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.449995 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9367-account-create-update-j4flr" event={"ID":"a470f046-5473-4e59-9bb1-19eea38494e9","Type":"ContainerStarted","Data":"bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363"} Jan 23 18:26:44 crc kubenswrapper[4688]: I0123 18:26:44.454719 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-260f-account-create-update-8zf7b" event={"ID":"ce28bac3-dbde-4da0-82bc-60d85b10aec9","Type":"ContainerStarted","Data":"054c5684e9cd143eec64f4539ef27e5cca2abf08169525c43e21038b63f3b204"} Jan 23 18:26:45 crc 
kubenswrapper[4688]: I0123 18:26:44.492816 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-a15b-account-create-update-cgc5k" podStartSLOduration=6.492786862 podStartE2EDuration="6.492786862s" podCreationTimestamp="2026-01-23 18:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:44.454944039 +0000 UTC m=+1199.450768500" watchObservedRunningTime="2026-01-23 18:26:44.492786862 +0000 UTC m=+1199.488611313" Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:44.518688 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-26sbc" podStartSLOduration=6.518655192 podStartE2EDuration="6.518655192s" podCreationTimestamp="2026-01-23 18:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:26:44.484089086 +0000 UTC m=+1199.479913527" watchObservedRunningTime="2026-01-23 18:26:44.518655192 +0000 UTC m=+1199.514479633" Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:44.786396 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.375601 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2402796-b932-490a-852b-3e76ebe62cb9" path="/var/lib/kubelet/pods/f2402796-b932-490a-852b-3e76ebe62cb9/volumes" Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.477973 4688 generic.go:334] "Generic (PLEG): container finished" podID="8ef29235-1e3f-4732-9770-24cf93856028" containerID="d6f6b5a36a1b6cdafc898d6d21bb11eeb00a86b94adeb2b2208e3a8e3eb189ba" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.478049 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2wc55" event={"ID":"8ef29235-1e3f-4732-9770-24cf93856028","Type":"ContainerDied","Data":"d6f6b5a36a1b6cdafc898d6d21bb11eeb00a86b94adeb2b2208e3a8e3eb189ba"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.486635 4688 generic.go:334] "Generic (PLEG): container finished" podID="ce28bac3-dbde-4da0-82bc-60d85b10aec9" containerID="699f940ec7a41e2912d2fc73d69bcbae46459e1c9f12cc086bba4cd5530824e1" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.486734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-260f-account-create-update-8zf7b" event={"ID":"ce28bac3-dbde-4da0-82bc-60d85b10aec9","Type":"ContainerDied","Data":"699f940ec7a41e2912d2fc73d69bcbae46459e1c9f12cc086bba4cd5530824e1"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.498631 4688 generic.go:334] "Generic (PLEG): container finished" podID="265a42d2-70db-43df-a5bf-99a70bfed1cb" containerID="87d3014248fb5e3be16e492a5ffdfb790086341fc372322827d896207bbacbd4" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.498769 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zlf47" event={"ID":"265a42d2-70db-43df-a5bf-99a70bfed1cb","Type":"ContainerDied","Data":"87d3014248fb5e3be16e492a5ffdfb790086341fc372322827d896207bbacbd4"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.512947 4688 generic.go:334] "Generic (PLEG): container finished" podID="e56e3474-2934-4305-8ebf-353db7dbc00a" containerID="ff7f7b9767f65aac9b4b3d3c2b52509bf239f9eb735e64ba3f49157a4e82751a" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 
18:26:45.513040 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a15b-account-create-update-cgc5k" event={"ID":"e56e3474-2934-4305-8ebf-353db7dbc00a","Type":"ContainerDied","Data":"ff7f7b9767f65aac9b4b3d3c2b52509bf239f9eb735e64ba3f49157a4e82751a"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.518942 4688 generic.go:334] "Generic (PLEG): container finished" podID="1f0a1072-51bd-47a1-a3e0-740f34f179c3" containerID="0ca2d6325783c894dd4bab5e5f45e54367a1b60ad0c99ace72905f48a4d290cc" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.519027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-26sbc" event={"ID":"1f0a1072-51bd-47a1-a3e0-740f34f179c3","Type":"ContainerDied","Data":"0ca2d6325783c894dd4bab5e5f45e54367a1b60ad0c99ace72905f48a4d290cc"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.523877 4688 generic.go:334] "Generic (PLEG): container finished" podID="a470f046-5473-4e59-9bb1-19eea38494e9" containerID="55f3f1d05edb20baf39e8880bc576fb22722a09291fddae8e20128c31dd602e3" exitCode=0 Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.524003 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9367-account-create-update-j4flr" event={"ID":"a470f046-5473-4e59-9bb1-19eea38494e9","Type":"ContainerDied","Data":"55f3f1d05edb20baf39e8880bc576fb22722a09291fddae8e20128c31dd602e3"} Jan 23 18:26:45 crc kubenswrapper[4688]: I0123 18:26:45.526266 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerStarted","Data":"ca76d0b238b164bee22636e006578ef8df671f8532d749e718e9ce5f7f41decc"} Jan 23 18:26:46 crc kubenswrapper[4688]: I0123 18:26:46.541750 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"cb890a80a060dbc1a226d5e182b9a23aad34a0195238bfaecfd83e9657b44e55"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.321757 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.346956 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.350690 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.397742 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts\") pod \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.398162 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zwtt\" (UniqueName: \"kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt\") pod \"a470f046-5473-4e59-9bb1-19eea38494e9\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.398231 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gv2r\" (UniqueName: \"kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r\") pod \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\" (UID: \"ce28bac3-dbde-4da0-82bc-60d85b10aec9\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.398619 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.399398 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce28bac3-dbde-4da0-82bc-60d85b10aec9" (UID: "ce28bac3-dbde-4da0-82bc-60d85b10aec9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.400708 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts\") pod \"a470f046-5473-4e59-9bb1-19eea38494e9\" (UID: \"a470f046-5473-4e59-9bb1-19eea38494e9\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.403913 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a470f046-5473-4e59-9bb1-19eea38494e9" (UID: "a470f046-5473-4e59-9bb1-19eea38494e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.404066 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.406383 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a470f046-5473-4e59-9bb1-19eea38494e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.406409 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce28bac3-dbde-4da0-82bc-60d85b10aec9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.412999 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r" (OuterVolumeSpecName: "kube-api-access-9gv2r") pod "ce28bac3-dbde-4da0-82bc-60d85b10aec9" (UID: "ce28bac3-dbde-4da0-82bc-60d85b10aec9"). InnerVolumeSpecName "kube-api-access-9gv2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.413675 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.415624 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt" (OuterVolumeSpecName: "kube-api-access-8zwtt") pod "a470f046-5473-4e59-9bb1-19eea38494e9" (UID: "a470f046-5473-4e59-9bb1-19eea38494e9"). InnerVolumeSpecName "kube-api-access-8zwtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.507604 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2bbb\" (UniqueName: \"kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb\") pod \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.507680 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts\") pod \"265a42d2-70db-43df-a5bf-99a70bfed1cb\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.507753 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts\") pod \"8ef29235-1e3f-4732-9770-24cf93856028\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.507894 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts\") pod \"e56e3474-2934-4305-8ebf-353db7dbc00a\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508005 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts\") pod \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\" (UID: \"1f0a1072-51bd-47a1-a3e0-740f34f179c3\") " Jan 23 18:26:47 
crc kubenswrapper[4688]: I0123 18:26:47.508036 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhh5m\" (UniqueName: \"kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m\") pod \"8ef29235-1e3f-4732-9770-24cf93856028\" (UID: \"8ef29235-1e3f-4732-9770-24cf93856028\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508089 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbrlb\" (UniqueName: \"kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb\") pod \"265a42d2-70db-43df-a5bf-99a70bfed1cb\" (UID: \"265a42d2-70db-43df-a5bf-99a70bfed1cb\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508117 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfck5\" (UniqueName: \"kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5\") pod \"e56e3474-2934-4305-8ebf-353db7dbc00a\" (UID: \"e56e3474-2934-4305-8ebf-353db7dbc00a\") " Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508586 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zwtt\" (UniqueName: \"kubernetes.io/projected/a470f046-5473-4e59-9bb1-19eea38494e9-kube-api-access-8zwtt\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508606 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gv2r\" (UniqueName: \"kubernetes.io/projected/ce28bac3-dbde-4da0-82bc-60d85b10aec9-kube-api-access-9gv2r\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508866 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f0a1072-51bd-47a1-a3e0-740f34f179c3" (UID: "1f0a1072-51bd-47a1-a3e0-740f34f179c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ef29235-1e3f-4732-9770-24cf93856028" (UID: "8ef29235-1e3f-4732-9770-24cf93856028"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.508977 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "265a42d2-70db-43df-a5bf-99a70bfed1cb" (UID: "265a42d2-70db-43df-a5bf-99a70bfed1cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.509079 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e56e3474-2934-4305-8ebf-353db7dbc00a" (UID: "e56e3474-2934-4305-8ebf-353db7dbc00a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.551628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a15b-account-create-update-cgc5k" event={"ID":"e56e3474-2934-4305-8ebf-353db7dbc00a","Type":"ContainerDied","Data":"78e809d0f87b889c478e51ded59c4c1e2d43c8a5e640e9c98eb0f0a399a5736d"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.551682 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e809d0f87b889c478e51ded59c4c1e2d43c8a5e640e9c98eb0f0a399a5736d" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.551753 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a15b-account-create-update-cgc5k" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.553754 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-26sbc" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.553759 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-26sbc" event={"ID":"1f0a1072-51bd-47a1-a3e0-740f34f179c3","Type":"ContainerDied","Data":"aa96f5076e76f0f4a152f2366b6c02bd98458642ca80464b05d2b9b237f920de"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.553867 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa96f5076e76f0f4a152f2366b6c02bd98458642ca80464b05d2b9b237f920de" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.555438 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9367-account-create-update-j4flr" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.555435 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9367-account-create-update-j4flr" event={"ID":"a470f046-5473-4e59-9bb1-19eea38494e9","Type":"ContainerDied","Data":"bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.555540 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf48663f695ac11ae7c2f46bf64a48beeb7d9963349930390eac003ac54c5363" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.560217 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2wc55" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.560214 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2wc55" event={"ID":"8ef29235-1e3f-4732-9770-24cf93856028","Type":"ContainerDied","Data":"c0ac227ade430da3c0c8b161d89093ce2f769ee2b07ad0aff18a726222534a00"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.560320 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ac227ade430da3c0c8b161d89093ce2f769ee2b07ad0aff18a726222534a00" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.562793 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-260f-account-create-update-8zf7b" event={"ID":"ce28bac3-dbde-4da0-82bc-60d85b10aec9","Type":"ContainerDied","Data":"054c5684e9cd143eec64f4539ef27e5cca2abf08169525c43e21038b63f3b204"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.562822 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054c5684e9cd143eec64f4539ef27e5cca2abf08169525c43e21038b63f3b204" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.562856 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-260f-account-create-update-8zf7b" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.571351 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-zlf47" event={"ID":"265a42d2-70db-43df-a5bf-99a70bfed1cb","Type":"ContainerDied","Data":"bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22"} Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.571393 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7d551f9d6b1cd24e49a550a00586bb23db97367856b41bbd11667e0ecb4d22" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.571410 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-zlf47" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.610719 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265a42d2-70db-43df-a5bf-99a70bfed1cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.610762 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ef29235-1e3f-4732-9770-24cf93856028-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.610774 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e56e3474-2934-4305-8ebf-353db7dbc00a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.610784 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f0a1072-51bd-47a1-a3e0-740f34f179c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.668449 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb" (OuterVolumeSpecName: "kube-api-access-tbrlb") pod "265a42d2-70db-43df-a5bf-99a70bfed1cb" (UID: "265a42d2-70db-43df-a5bf-99a70bfed1cb"). InnerVolumeSpecName "kube-api-access-tbrlb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.713419 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbrlb\" (UniqueName: \"kubernetes.io/projected/265a42d2-70db-43df-a5bf-99a70bfed1cb-kube-api-access-tbrlb\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.766996 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5" (OuterVolumeSpecName: "kube-api-access-wfck5") pod "e56e3474-2934-4305-8ebf-353db7dbc00a" (UID: "e56e3474-2934-4305-8ebf-353db7dbc00a"). InnerVolumeSpecName "kube-api-access-wfck5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.767222 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m" (OuterVolumeSpecName: "kube-api-access-lhh5m") pod "8ef29235-1e3f-4732-9770-24cf93856028" (UID: "8ef29235-1e3f-4732-9770-24cf93856028"). InnerVolumeSpecName "kube-api-access-lhh5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.767385 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb" (OuterVolumeSpecName: "kube-api-access-g2bbb") pod "1f0a1072-51bd-47a1-a3e0-740f34f179c3" (UID: "1f0a1072-51bd-47a1-a3e0-740f34f179c3"). InnerVolumeSpecName "kube-api-access-g2bbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.816056 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhh5m\" (UniqueName: \"kubernetes.io/projected/8ef29235-1e3f-4732-9770-24cf93856028-kube-api-access-lhh5m\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.816101 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfck5\" (UniqueName: \"kubernetes.io/projected/e56e3474-2934-4305-8ebf-353db7dbc00a-kube-api-access-wfck5\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:47 crc kubenswrapper[4688]: I0123 18:26:47.816111 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2bbb\" (UniqueName: \"kubernetes.io/projected/1f0a1072-51bd-47a1-a3e0-740f34f179c3-kube-api-access-g2bbb\") on node \"crc\" DevicePath \"\"" Jan 23 18:26:48 crc kubenswrapper[4688]: I0123 18:26:48.587946 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerStarted","Data":"8535910b0624778667f6ed21e1126b11bf194bcc76875fe4a5c9cfeab8771ea0"} Jan 23 18:26:48 crc kubenswrapper[4688]: I0123 18:26:48.596468 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"ff7b22d9fc694f30f9eb1d151c79642a7b8cb6adf9043d4b5defc06b6b5dcf76"} Jan 23 18:26:56 crc kubenswrapper[4688]: I0123 18:26:56.740104 4688 generic.go:334] "Generic (PLEG): container finished" podID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerID="8535910b0624778667f6ed21e1126b11bf194bcc76875fe4a5c9cfeab8771ea0" exitCode=0 Jan 23 18:26:56 crc kubenswrapper[4688]: I0123 18:26:56.740229 4688 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerDied","Data":"8535910b0624778667f6ed21e1126b11bf194bcc76875fe4a5c9cfeab8771ea0"} Jan 23 18:27:06 crc kubenswrapper[4688]: I0123 18:27:06.965862 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:27:06 crc kubenswrapper[4688]: I0123 18:27:06.966472 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:27:08 crc kubenswrapper[4688]: E0123 18:27:08.254330 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.35:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 23 18:27:08 crc kubenswrapper[4688]: E0123 18:27:08.255434 4688 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.35:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Jan 23 18:27:08 crc kubenswrapper[4688]: E0123 18:27:08.255748 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.129.56.35:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jggfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-94mh9_openstack(ea982eec-acb6-45c7-8f69-36df2323747c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:27:08 crc kubenswrapper[4688]: E0123 18:27:08.257319 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-94mh9" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" Jan 23 18:27:08 crc kubenswrapper[4688]: I0123 18:27:08.858876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerStarted","Data":"b606d533ad41879be922e9db33b200589a61c425d40ec7008b8e132b3dd84b07"} Jan 23 18:27:08 crc kubenswrapper[4688]: I0123 18:27:08.862268 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"88311d2a85e7f5b83fd770f76ca7d2d1cccabe4a8a0ff8a8dabb0f641bdf0d1a"} Jan 23 18:27:08 crc kubenswrapper[4688]: I0123 18:27:08.864916 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twr6s" event={"ID":"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe","Type":"ContainerStarted","Data":"f42b7e77fb6c22271ab3fd2c8a41bb234e30a210d262dce7445ac71435e65202"} Jan 23 18:27:08 crc kubenswrapper[4688]: E0123 18:27:08.868232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.35:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-94mh9" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" Jan 23 18:27:08 crc kubenswrapper[4688]: I0123 18:27:08.895702 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-twr6s" podStartSLOduration=6.532218092 podStartE2EDuration="30.895677366s" podCreationTimestamp="2026-01-23 18:26:38 +0000 UTC" firstStartedPulling="2026-01-23 18:26:43.835311571 +0000 UTC m=+1198.831136002" lastFinishedPulling="2026-01-23 18:27:08.198770825 +0000 UTC m=+1223.194595276" observedRunningTime="2026-01-23 18:27:08.885852337 +0000 UTC m=+1223.881676788" watchObservedRunningTime="2026-01-23 18:27:08.895677366 +0000 UTC m=+1223.891501807" Jan 23 18:27:09 crc kubenswrapper[4688]: I0123 18:27:09.880372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"622620ac7c68eee97d4a212753b728ec0cc4364c4a332729b7ecd3f900f03fd7"} Jan 23 18:27:09 crc kubenswrapper[4688]: I0123 18:27:09.883932 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wcz56" event={"ID":"620ac0a5-247a-4207-83e0-d6776834d4ad","Type":"ContainerStarted","Data":"f2bd73f8aadf30071096c98a62dd31573c124f2a1985baff609e903d5d7f7172"} Jan 23 18:27:09 crc kubenswrapper[4688]: I0123 18:27:09.914645 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wcz56" podStartSLOduration=3.277278923 podStartE2EDuration="48.914616676s" podCreationTimestamp="2026-01-23 18:26:21 +0000 UTC" firstStartedPulling="2026-01-23 18:26:22.772176218 +0000 UTC m=+1177.768000659" lastFinishedPulling="2026-01-23 18:27:08.409513971 +0000 UTC m=+1223.405338412" observedRunningTime="2026-01-23 18:27:09.906506427 +0000 UTC m=+1224.902330868" watchObservedRunningTime="2026-01-23 18:27:09.914616676 +0000 UTC m=+1224.910441117" Jan 23 18:27:10 crc kubenswrapper[4688]: I0123 18:27:10.897311 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"952ffdec1b90d19067e6c3676a332c42aa8dd16ff512af4583fb8f81ee6a40c4"} Jan 23 18:27:10 crc kubenswrapper[4688]: I0123 18:27:10.897798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"cbcf584609ba94217c5fb1e2c8a409726c5b6131ce4093bbdc5455570863f8bb"} Jan 23 18:27:11 crc kubenswrapper[4688]: I0123 18:27:11.925866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerStarted","Data":"3aff1ace720d5b537d98cf9c6d20923f424d2d22f33058b8f3ea933ecd9eb3b0"} Jan 23 18:27:11 crc kubenswrapper[4688]: I0123 18:27:11.940412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"1e23f50954bfbf6be62c43a82ea405101f9d43e6dd5ead4f5e5f22e4c9a9bcf5"} Jan 23 18:27:11 crc kubenswrapper[4688]: I0123 18:27:11.940494 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"4cc0d914b4e80997348f9f95508f03870e2d643ed86c0a1fd830738127b14818"} Jan 23 18:27:12 crc kubenswrapper[4688]: I0123 18:27:12.956235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerStarted","Data":"bc398ab3de719e1581112306e319393e864bd7705c14c74761f7db65f3ec03c4"} Jan 23 18:27:12 crc kubenswrapper[4688]: I0123 18:27:12.992269 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=29.992240809 podStartE2EDuration="29.992240809s" podCreationTimestamp="2026-01-23 18:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:12.983689537 +0000 UTC m=+1227.979513998" watchObservedRunningTime="2026-01-23 18:27:12.992240809 +0000 UTC m=+1227.988065250" Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.917345 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.917694 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.975053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"a58a0ae19d6c7cd202fc2b6787119551e5acc9f6d2d14716071fc58770ed8678"} Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.975104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"1afb0c28b1502111135d55d08e7ef323557bdefc1ab9eb9576cd4e42f03b5be6"} Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.975121 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"bd531cba233b31fc4094b3b25dc808e46ccbc3cd5a5a8762af277146928becf4"} Jan 23 18:27:13 crc kubenswrapper[4688]: I0123 18:27:13.980818 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 18:27:14 crc kubenswrapper[4688]: I0123 18:27:14.990432 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"326cd8cc6ad525b790ec1853e4554c10453547e40e5f42a6483f4a1b650a62c6"} Jan 23 18:27:14 crc kubenswrapper[4688]: I0123 18:27:14.990817 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"b699ea45e7963a2d41affb0a5d56c7e1d6091f1d1cc832a246548af0fb0885fb"} Jan 23 18:27:14 crc kubenswrapper[4688]: I0123 18:27:14.990837 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"7c635b988af2f55090db8e6cbcce4088573bc65aee02e8aca0ea0db34e3283ef"} Jan 23 18:27:14 crc kubenswrapper[4688]: I0123 18:27:14.990881 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ccb24002-aac7-4341-b434-58189d7792e5","Type":"ContainerStarted","Data":"d1d4846962444f5cb832293c46ae5c997606ee241d8220ce631d57d9b6e03def"} Jan 23 18:27:14 crc kubenswrapper[4688]: I0123 18:27:14.997880 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.079705 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=42.932656579 podStartE2EDuration="1m12.079669455s" podCreationTimestamp="2026-01-23 18:26:03 +0000 UTC" firstStartedPulling="2026-01-23 18:26:43.688939557 +0000 UTC m=+1198.684764008" lastFinishedPulling="2026-01-23 18:27:12.835952443 +0000 UTC m=+1227.831776884" observedRunningTime="2026-01-23 18:27:15.039145973 +0000 UTC m=+1230.034970444" watchObservedRunningTime="2026-01-23 18:27:15.079669455 +0000 UTC m=+1230.075493896" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.462877 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464011 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56e3474-2934-4305-8ebf-353db7dbc00a" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464044 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56e3474-2934-4305-8ebf-353db7dbc00a" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464062 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce28bac3-dbde-4da0-82bc-60d85b10aec9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464077 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce28bac3-dbde-4da0-82bc-60d85b10aec9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464095 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a470f046-5473-4e59-9bb1-19eea38494e9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464107 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a470f046-5473-4e59-9bb1-19eea38494e9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464120 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265a42d2-70db-43df-a5bf-99a70bfed1cb" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464128 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="265a42d2-70db-43df-a5bf-99a70bfed1cb" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464141 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0a1072-51bd-47a1-a3e0-740f34f179c3" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464149 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0a1072-51bd-47a1-a3e0-740f34f179c3" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: E0123 18:27:15.464168 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef29235-1e3f-4732-9770-24cf93856028" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464177 4688 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ef29235-1e3f-4732-9770-24cf93856028" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464466 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0a1072-51bd-47a1-a3e0-740f34f179c3" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464485 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56e3474-2934-4305-8ebf-353db7dbc00a" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464502 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef29235-1e3f-4732-9770-24cf93856028" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464512 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a470f046-5473-4e59-9bb1-19eea38494e9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464524 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce28bac3-dbde-4da0-82bc-60d85b10aec9" containerName="mariadb-account-create-update" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.464545 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="265a42d2-70db-43df-a5bf-99a70bfed1cb" containerName="mariadb-database-create" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.466181 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.475390 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.499796 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.575547 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.575666 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.575852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.576043 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: 
I0123 18:27:15.576105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnvb\" (UniqueName: \"kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.576512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678125 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678387 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678442 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.678474 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnvb\" (UniqueName: \"kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.679391 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.679400 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.679766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.680061 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.683726 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.724607 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnvb\" (UniqueName: \"kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb\") pod \"dnsmasq-dns-764c5664d7-68v7f\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:15 crc kubenswrapper[4688]: I0123 18:27:15.789028 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:16 crc kubenswrapper[4688]: I0123 18:27:16.006353 4688 generic.go:334] "Generic (PLEG): container finished" podID="4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" containerID="f42b7e77fb6c22271ab3fd2c8a41bb234e30a210d262dce7445ac71435e65202" exitCode=0 Jan 23 18:27:16 crc kubenswrapper[4688]: I0123 18:27:16.006490 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twr6s" event={"ID":"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe","Type":"ContainerDied","Data":"f42b7e77fb6c22271ab3fd2c8a41bb234e30a210d262dce7445ac71435e65202"} Jan 23 18:27:16 crc kubenswrapper[4688]: I0123 18:27:16.269486 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:16 crc kubenswrapper[4688]: W0123 18:27:16.275782 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce119397_ad11_436e_9349_13c21b15e852.slice/crio-f661c662e5d67a7e183bacefeb7b079951e1df083fe5d5c5e51b5abcdaad9e14 WatchSource:0}: Error finding container f661c662e5d67a7e183bacefeb7b079951e1df083fe5d5c5e51b5abcdaad9e14: Status 404 returned error can't find the container with id f661c662e5d67a7e183bacefeb7b079951e1df083fe5d5c5e51b5abcdaad9e14 Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.022793 4688 generic.go:334] "Generic (PLEG): container finished" podID="ce119397-ad11-436e-9349-13c21b15e852" containerID="24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699" exitCode=0 Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.023787 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" event={"ID":"ce119397-ad11-436e-9349-13c21b15e852","Type":"ContainerDied","Data":"24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699"} Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.023852 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" event={"ID":"ce119397-ad11-436e-9349-13c21b15e852","Type":"ContainerStarted","Data":"f661c662e5d67a7e183bacefeb7b079951e1df083fe5d5c5e51b5abcdaad9e14"} Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.411927 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-twr6s" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.517787 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb8dx\" (UniqueName: \"kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx\") pod \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.517929 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data\") pod \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.518165 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle\") pod \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\" (UID: \"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe\") " Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.523712 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx" (OuterVolumeSpecName: "kube-api-access-mb8dx") pod "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" (UID: "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe"). InnerVolumeSpecName "kube-api-access-mb8dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.548260 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" (UID: "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.563493 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data" (OuterVolumeSpecName: "config-data") pod "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" (UID: "4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.622060 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.622149 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb8dx\" (UniqueName: \"kubernetes.io/projected/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-kube-api-access-mb8dx\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:17 crc kubenswrapper[4688]: I0123 18:27:17.622166 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.033885 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-twr6s" event={"ID":"4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe","Type":"ContainerDied","Data":"fbf8cf13af41ab659eec5db26e5572c06828b364cf198333d27becb54cb5bd68"} Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.033928 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbf8cf13af41ab659eec5db26e5572c06828b364cf198333d27becb54cb5bd68" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.033986 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-twr6s" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.040521 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" event={"ID":"ce119397-ad11-436e-9349-13c21b15e852","Type":"ContainerStarted","Data":"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df"} Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.040706 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.089095 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" podStartSLOduration=3.08906464 podStartE2EDuration="3.08906464s" podCreationTimestamp="2026-01-23 18:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:18.072916735 +0000 UTC m=+1233.068741176" watchObservedRunningTime="2026-01-23 18:27:18.08906464 +0000 UTC m=+1233.084889081" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.330112 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.357628 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4bsn8"] Jan 23 18:27:18 crc kubenswrapper[4688]: E0123 18:27:18.358285 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" containerName="keystone-db-sync" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.358318 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" containerName="keystone-db-sync" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.358627 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" 
containerName="keystone-db-sync" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.359529 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.362274 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.363119 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.371946 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.372036 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.372444 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ttwkl" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.395876 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4bsn8"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.413507 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.415384 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.479841 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541515 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541668 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541703 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541731 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krl97\" (UniqueName: \"kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541799 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrgw\" (UniqueName: \"kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541827 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541902 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.541928 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.542003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.545786 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vsp8t"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.547407 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.551864 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.558705 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2f2qs" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.564166 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.574605 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vsp8t"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.643328 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.644827 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.644925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.644999 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645045 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645077 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645107 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krl97\" (UniqueName: \"kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " 
pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645180 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645233 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrgw\" (UniqueName: \"kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645262 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645309 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645335 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645357 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645397 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7h62\" (UniqueName: \"kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645454 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc 
kubenswrapper[4688]: I0123 18:27:18.645510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.645541 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.646826 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.647902 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.648017 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.650326 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.651088 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.651929 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.653827 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.659021 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.660280 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.660737 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.661707 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.661957 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.662138 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-lgf7j" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.666041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.677893 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.680406 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.688203 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krl97\" (UniqueName: \"kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97\") pod \"dnsmasq-dns-5959f8865f-bp9tr\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.699026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrgw\" (UniqueName: \"kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw\") pod \"keystone-bootstrap-4bsn8\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.719556 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.722807 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.732258 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.745717 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.746689 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747213 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747286 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747365 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747402 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8p2t\" (UniqueName: \"kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747456 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7h62\" (UniqueName: \"kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747513 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747539 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747577 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc 
kubenswrapper[4688]: I0123 18:27:18.747617 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.747650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.754818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.761272 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.761961 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.763126 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.768828 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.810176 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xmgh7"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.812418 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.823240 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7h62\" (UniqueName: \"kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62\") pod \"cinder-db-sync-vsp8t\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") " pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.823988 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.846736 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-htzlw" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.852962 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853010 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853042 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853071 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853110 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853194 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853228 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853304 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q866\" (UniqueName: \"kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853327 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.853360 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8p2t\" (UniqueName: \"kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.860098 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.865994 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9lvbq"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.866745 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.867241 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.867675 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.867831 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.870335 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.871205 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f44g6" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.877607 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.880518 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vsp8t" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.889317 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.904091 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8p2t\" (UniqueName: \"kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t\") pod \"horizon-68fdb6575c-9fggx\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") " pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.939256 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xmgh7"] Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.942680 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.965366 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q866\" (UniqueName: \"kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.965414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.965620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.965644 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.965670 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.981413 4688 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991068 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59nw\" (UniqueName: \"kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991307 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991510 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2bv\" (UniqueName: \"kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.991671 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:18 crc kubenswrapper[4688]: I0123 18:27:18.993141 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.005361 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.006620 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q866\" (UniqueName: \"kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.030618 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.034231 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9lvbq"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.035164 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.037438 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.039032 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data\") pod \"ceilometer-0\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") " pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.092403 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m28xl"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.095799 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.101339 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.102055 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzgwv" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.102338 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.117418 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.117550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59nw\" (UniqueName: \"kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.117627 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2bv\" (UniqueName: \"kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.131277 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.131390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.131518 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.139996 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.144663 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle\") pod \"neutron-db-sync-9lvbq\" (UID: 
\"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.145159 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.150908 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59nw\" (UniqueName: \"kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw\") pod \"neutron-db-sync-9lvbq\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.157450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.164891 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb2bv\" (UniqueName: \"kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv\") pod \"barbican-db-sync-xmgh7\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.186575 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m28xl"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.239975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.240038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.240214 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.240334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.240371 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb4q\" (UniqueName: 
\"kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.256338 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.296640 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.298877 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.302021 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.332765 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.361700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.362549 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.362893 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb4q\" (UniqueName: \"kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.363328 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.364346 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.371462 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.392327 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.427590 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.427757 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.430672 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.454871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb4q\" (UniqueName: \"kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q\") pod \"placement-db-sync-m28xl\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.468548 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.473728 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqn9x\" (UniqueName: \"kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.473889 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.473930 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.474010 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.474039 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.474096 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.478794 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.496251 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.498132 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m28xl" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.505053 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579365 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqn9x\" (UniqueName: \"kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579505 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579585 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: 
\"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579657 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579687 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579722 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.579803 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld6mg\" (UniqueName: \"kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.581354 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.581920 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.582417 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.582950 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 
23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.588836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.595675 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.611953 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqn9x\" (UniqueName: \"kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x\") pod \"dnsmasq-dns-58dd9ff6bc-h49m6\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.682508 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.682660 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.683776 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld6mg\" (UniqueName: \"kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.683885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.683956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.684555 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.685215 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 
18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.696907 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.698027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.716818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6mg\" (UniqueName: \"kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg\") pod \"horizon-558dd665cf-xhjvb\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") " pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.824348 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:19 crc kubenswrapper[4688]: I0123 18:27:19.884060 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.078214 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.096740 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.098305 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vsp8t"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.171907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerStarted","Data":"7110f407018500af55c43e50ebae9257c20211a49e7daadfece13ffa03e78c5e"} Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.175317 4688 generic.go:334] "Generic (PLEG): container finished" podID="eb1f5986-e1fc-49b5-aee4-9680a4338e84" containerID="0bca90b8c6eb5c05a5ae5a6c909e55b420c533111174a968ba8fe78b78dc5189" exitCode=0 Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.175486 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" event={"ID":"eb1f5986-e1fc-49b5-aee4-9680a4338e84","Type":"ContainerDied","Data":"0bca90b8c6eb5c05a5ae5a6c909e55b420c533111174a968ba8fe78b78dc5189"} Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.175530 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" event={"ID":"eb1f5986-e1fc-49b5-aee4-9680a4338e84","Type":"ContainerStarted","Data":"6d6d9444b8013b2b988f99bbf90b3382d967cbf48f3db64a376c0b8d2af86b11"} Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.224419 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vsp8t" event={"ID":"b8d25eb5-0041-42b6-8b61-ad9e728c3049","Type":"ContainerStarted","Data":"70b9b926e2e9aa7e2e94519195cc000a324f771c949e87431d22a0e28e611e37"} Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.224628 4688 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="dnsmasq-dns" containerID="cri-o://01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df" gracePeriod=10 Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.246345 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4bsn8"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.304593 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xmgh7"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.391718 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9lvbq"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.430927 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.491882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m28xl"] Jan 23 18:27:20 crc kubenswrapper[4688]: W0123 18:27:20.500087 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31e41e2a_24eb_4116_8a8a_35e34558ec71.slice/crio-47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51 WatchSource:0}: Error finding container 47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51: Status 404 returned error can't find the container with id 47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51 Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.642559 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.662932 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.689650 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.826691 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.826938 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.826973 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.827003 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.827046 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.827170 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krl97\" (UniqueName: \"kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.838584 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97" (OuterVolumeSpecName: "kube-api-access-krl97") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "kube-api-access-krl97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.881547 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.882792 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.886358 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.893572 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.930447 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.931542 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.931742 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.931874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qnvb\" (UniqueName: \"kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.932130 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") pod \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\" (UID: \"eb1f5986-e1fc-49b5-aee4-9680a4338e84\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.932220 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.932260 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.932299 4688 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0\") pod \"ce119397-ad11-436e-9349-13c21b15e852\" (UID: \"ce119397-ad11-436e-9349-13c21b15e852\") " Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.933759 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krl97\" (UniqueName: \"kubernetes.io/projected/eb1f5986-e1fc-49b5-aee4-9680a4338e84-kube-api-access-krl97\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.933786 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.933797 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.933811 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:20 crc kubenswrapper[4688]: W0123 18:27:20.934433 4688 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/eb1f5986-e1fc-49b5-aee4-9680a4338e84/volumes/kubernetes.io~configmap/dns-svc Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.934456 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.938263 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb" (OuterVolumeSpecName: "kube-api-access-4qnvb") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "kube-api-access-4qnvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:20 crc kubenswrapper[4688]: I0123 18:27:20.968630 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config" (OuterVolumeSpecName: "config") pod "eb1f5986-e1fc-49b5-aee4-9680a4338e84" (UID: "eb1f5986-e1fc-49b5-aee4-9680a4338e84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:20.999843 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.031262 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.042251 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qnvb\" (UniqueName: \"kubernetes.io/projected/ce119397-ad11-436e-9349-13c21b15e852-kube-api-access-4qnvb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.042287 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.042317 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb1f5986-e1fc-49b5-aee4-9680a4338e84-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.042329 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.042339 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.048797 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config" (OuterVolumeSpecName: "config") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.056155 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.124298 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce119397-ad11-436e-9349-13c21b15e852" (UID: "ce119397-ad11-436e-9349-13c21b15e852"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.148373 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.148410 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.148423 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce119397-ad11-436e-9349-13c21b15e852-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.269027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" event={"ID":"1cf9be80-df2a-4135-9203-d078ad33acf3","Type":"ContainerStarted","Data":"f223734ff0dd4d800dea567c65d4a106ab966c0cd0775f594de0cba2c37f543e"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.287506 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerStarted","Data":"e18fa56c4182bc56c5f2e3b93cd39faccc72cb4dda9a8840a61d65ce391928ec"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.301480 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.315502 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerStarted","Data":"daeb51e104ca5c5fc510ead2f63ea721e51b847e8afe70cc4a21063348e4e0a6"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.337767 4688 generic.go:334] "Generic (PLEG): container finished" podID="ce119397-ad11-436e-9349-13c21b15e852" containerID="01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df" exitCode=0 Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.337911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" event={"ID":"ce119397-ad11-436e-9349-13c21b15e852","Type":"ContainerDied","Data":"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.337950 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" event={"ID":"ce119397-ad11-436e-9349-13c21b15e852","Type":"ContainerDied","Data":"f661c662e5d67a7e183bacefeb7b079951e1df083fe5d5c5e51b5abcdaad9e14"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.337970 4688 scope.go:117] "RemoveContainer" containerID="01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.338178 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-68v7f" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.344579 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.527830 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:27:21 crc kubenswrapper[4688]: E0123 18:27:21.529045 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="init" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.529082 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="init" Jan 23 18:27:21 crc kubenswrapper[4688]: E0123 18:27:21.529110 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1f5986-e1fc-49b5-aee4-9680a4338e84" containerName="init" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.529120 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1f5986-e1fc-49b5-aee4-9680a4338e84" containerName="init" Jan 23 18:27:21 crc kubenswrapper[4688]: E0123 18:27:21.529155 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="dnsmasq-dns" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.529166 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="dnsmasq-dns" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.529714 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce119397-ad11-436e-9349-13c21b15e852" containerName="dnsmasq-dns" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.529772 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1f5986-e1fc-49b5-aee4-9680a4338e84" containerName="init" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.532050 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m28xl" event={"ID":"31e41e2a-24eb-4116-8a8a-35e34558ec71","Type":"ContainerStarted","Data":"47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.532118 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmgh7" event={"ID":"fc227102-c953-4a8b-bfc2-918b63e457c1","Type":"ContainerStarted","Data":"1b84cde335a72c60effc29aabcf36d0b317bd63a34b498c615cb9403cec1f65c"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.532325 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.581639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9lvbq" event={"ID":"7226bf67-7adb-4ce2-b595-957d81002a96","Type":"ContainerStarted","Data":"356b4164f0ea8137384f762b11a26da39f79f0cbd7592fd69b395ce91bbe8925"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.581726 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9lvbq" event={"ID":"7226bf67-7adb-4ce2-b595-957d81002a96","Type":"ContainerStarted","Data":"ead71136a3f370d7c9ba359dfccc3aef95ee7e1a324b09e24957b2645b1fc4f7"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.583150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdqz\" (UniqueName: \"kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.583232 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.583287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.583348 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.583386 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.610799 4688 scope.go:117] "RemoveContainer" containerID="24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.615327 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bsn8" event={"ID":"90106b59-2826-4770-8211-3ff275cb56fa","Type":"ContainerStarted","Data":"588d2b239a1b6626600028ec1b36b214f98f0d93d5e0cef36b02110021a836c2"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.615375 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bsn8" event={"ID":"90106b59-2826-4770-8211-3ff275cb56fa","Type":"ContainerStarted","Data":"de83ccaa6723beab2b94a30e663ee88cf5ac2388e70ca5dcc90fe0a3b26496b7"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.635840 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" event={"ID":"eb1f5986-e1fc-49b5-aee4-9680a4338e84","Type":"ContainerDied","Data":"6d6d9444b8013b2b988f99bbf90b3382d967cbf48f3db64a376c0b8d2af86b11"} Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.636021 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-bp9tr" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.643158 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.665332 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.686785 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.686957 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.687030 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.687155 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdqz\" (UniqueName: \"kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.687259 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.688769 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.690328 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-68v7f"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.690395 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 
18:27:21.691712 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.695339 4688 scope.go:117] "RemoveContainer" containerID="01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df" Jan 23 18:27:21 crc kubenswrapper[4688]: E0123 18:27:21.696371 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df\": container with ID starting with 01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df not found: ID does not exist" containerID="01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.696416 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df"} err="failed to get container status \"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df\": rpc error: code = NotFound desc = could not find container \"01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df\": container with ID starting with 01c523350379bed789522e904bf2d19ba383b937b7d870e209d185cabcb038df not found: ID does not exist" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.696445 4688 scope.go:117] "RemoveContainer" containerID="24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699" Jan 23 18:27:21 crc kubenswrapper[4688]: E0123 18:27:21.697761 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699\": container with ID starting with 24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699 not found: ID does not exist" containerID="24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.697820 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699"} err="failed to get container status \"24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699\": rpc error: code = NotFound desc = could not find container \"24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699\": container with ID starting with 24deaf10042a05d0ee1e0fa7e61e708e6e0b526f2f57e01ec82c3c2f3a00d699 not found: ID does not exist" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.697856 4688 scope.go:117] "RemoveContainer" containerID="0bca90b8c6eb5c05a5ae5a6c909e55b420c533111174a968ba8fe78b78dc5189" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.719905 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9lvbq" podStartSLOduration=3.719882606 podStartE2EDuration="3.719882606s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:21.644873445 +0000 UTC m=+1236.640697896" watchObservedRunningTime="2026-01-23 18:27:21.719882606 +0000 UTC m=+1236.715707057" Jan 23 18:27:21 
crc kubenswrapper[4688]: I0123 18:27:21.734715 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdqz\" (UniqueName: \"kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.739487 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key\") pod \"horizon-6f84479849-glxjc\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") " pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.740671 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4bsn8" podStartSLOduration=3.7406423540000002 podStartE2EDuration="3.740642354s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:21.695705749 +0000 UTC m=+1236.691530190" watchObservedRunningTime="2026-01-23 18:27:21.740642354 +0000 UTC m=+1236.736466795" Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.793126 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.805371 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-bp9tr"] Jan 23 18:27:21 crc kubenswrapper[4688]: I0123 18:27:21.926741 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:27:22 crc kubenswrapper[4688]: I0123 18:27:22.613066 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:27:22 crc kubenswrapper[4688]: W0123 18:27:22.636757 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4a402bb_fae6_4f62_b956_eca577195a79.slice/crio-df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55 WatchSource:0}: Error finding container df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55: Status 404 returned error can't find the container with id df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55 Jan 23 18:27:22 crc kubenswrapper[4688]: I0123 18:27:22.653737 4688 generic.go:334] "Generic (PLEG): container finished" podID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerID="a1c63aad83adb6ee4c94a456154d7990b526246afb1deaf8bcd78962b1eb292c" exitCode=0 Jan 23 18:27:22 crc kubenswrapper[4688]: I0123 18:27:22.653856 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" event={"ID":"1cf9be80-df2a-4135-9203-d078ad33acf3","Type":"ContainerDied","Data":"a1c63aad83adb6ee4c94a456154d7990b526246afb1deaf8bcd78962b1eb292c"} Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.388018 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce119397-ad11-436e-9349-13c21b15e852" path="/var/lib/kubelet/pods/ce119397-ad11-436e-9349-13c21b15e852/volumes" Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.389423 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb1f5986-e1fc-49b5-aee4-9680a4338e84" 
path="/var/lib/kubelet/pods/eb1f5986-e1fc-49b5-aee4-9680a4338e84/volumes" Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.683067 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" event={"ID":"1cf9be80-df2a-4135-9203-d078ad33acf3","Type":"ContainerStarted","Data":"d972dcd52c935b6f7acfefba49d9be6e03ce424f41c4649ad2798ea305751bc1"} Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.683464 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.686227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerStarted","Data":"df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55"} Jan 23 18:27:23 crc kubenswrapper[4688]: I0123 18:27:23.712109 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" podStartSLOduration=5.712077287 podStartE2EDuration="5.712077287s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:23.707404943 +0000 UTC m=+1238.703229394" watchObservedRunningTime="2026-01-23 18:27:23.712077287 +0000 UTC m=+1238.707901728" Jan 23 18:27:24 crc kubenswrapper[4688]: I0123 18:27:24.715292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-94mh9" event={"ID":"ea982eec-acb6-45c7-8f69-36df2323747c","Type":"ContainerStarted","Data":"2f82f78969f6a901660fff71f27b953a7917931860e7f1b20bfaaa60c737f518"} Jan 23 18:27:24 crc kubenswrapper[4688]: I0123 18:27:24.752757 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-94mh9" podStartSLOduration=7.158035216 podStartE2EDuration="46.752662832s" podCreationTimestamp="2026-01-23 18:26:38 +0000 UTC" firstStartedPulling="2026-01-23 18:26:43.889380661 +0000 UTC m=+1198.885205102" lastFinishedPulling="2026-01-23 18:27:23.484008277 +0000 UTC m=+1238.479832718" observedRunningTime="2026-01-23 18:27:24.741530261 +0000 UTC m=+1239.737354702" watchObservedRunningTime="2026-01-23 18:27:24.752662832 +0000 UTC m=+1239.748487273" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.851435 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"] Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.914849 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.925998 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.929084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.935960 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dm4\" (UniqueName: \"kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983437 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983512 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983655 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983690 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:27 crc kubenswrapper[4688]: I0123 18:27:27.983737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.011677 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.065441 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-689f6b4f86-pbwfh"] Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.069165 4688 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085327 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085508 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085624 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6dm4\" (UniqueName: \"kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085683 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.085757 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.086214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.086887 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.087715 4688 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/horizon-689f6b4f86-pbwfh"] Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.089098 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.096686 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.107074 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.111783 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.125073 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6dm4\" (UniqueName: \"kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4\") pod \"horizon-c854fbb9b-lr4lr\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189596 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfs7f\" (UniqueName: \"kubernetes.io/projected/56f27597-f638-4b6d-84e9-3a3671c089ac-kube-api-access-dfs7f\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189770 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-combined-ca-bundle\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189830 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-secret-key\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189871 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-config-data\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 
23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f27597-f638-4b6d-84e9-3a3671c089ac-logs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.189980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-tls-certs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.190256 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-scripts\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.258145 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.292137 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-scripts\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.292550 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfs7f\" (UniqueName: \"kubernetes.io/projected/56f27597-f638-4b6d-84e9-3a3671c089ac-kube-api-access-dfs7f\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.292784 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-combined-ca-bundle\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.292893 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-secret-key\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.293001 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-config-data\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.293138 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f27597-f638-4b6d-84e9-3a3671c089ac-logs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: 
\"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.293328 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-tls-certs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.294475 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-scripts\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.296658 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56f27597-f638-4b6d-84e9-3a3671c089ac-config-data\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.297300 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f27597-f638-4b6d-84e9-3a3671c089ac-logs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.303282 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-tls-certs\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.303493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-combined-ca-bundle\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.303920 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56f27597-f638-4b6d-84e9-3a3671c089ac-horizon-secret-key\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.319052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfs7f\" (UniqueName: \"kubernetes.io/projected/56f27597-f638-4b6d-84e9-3a3671c089ac-kube-api-access-dfs7f\") pod \"horizon-689f6b4f86-pbwfh\" (UID: \"56f27597-f638-4b6d-84e9-3a3671c089ac\") " pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:28 crc kubenswrapper[4688]: I0123 18:27:28.495824 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:29 crc kubenswrapper[4688]: I0123 18:27:29.825359 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:27:29 crc kubenswrapper[4688]: I0123 18:27:29.900829 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"] Jan 23 18:27:29 crc kubenswrapper[4688]: I0123 18:27:29.901116 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" containerID="cri-o://31502d29ac503f003a0dfb9f7b37d7e2e3fce08fdbbaeaefad880f0af1304fec" gracePeriod=10 Jan 23 18:27:30 crc kubenswrapper[4688]: I0123 18:27:30.794379 4688 generic.go:334] "Generic (PLEG): container finished" podID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerID="31502d29ac503f003a0dfb9f7b37d7e2e3fce08fdbbaeaefad880f0af1304fec" exitCode=0 Jan 23 18:27:30 crc kubenswrapper[4688]: I0123 18:27:30.794446 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jrlkz" event={"ID":"37ba61eb-0e82-4af5-8756-cc56550dd6ed","Type":"ContainerDied","Data":"31502d29ac503f003a0dfb9f7b37d7e2e3fce08fdbbaeaefad880f0af1304fec"} Jan 23 18:27:31 crc kubenswrapper[4688]: I0123 18:27:31.811841 4688 generic.go:334] "Generic (PLEG): container finished" podID="90106b59-2826-4770-8211-3ff275cb56fa" containerID="588d2b239a1b6626600028ec1b36b214f98f0d93d5e0cef36b02110021a836c2" exitCode=0 Jan 23 18:27:31 crc kubenswrapper[4688]: I0123 18:27:31.811918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4bsn8" event={"ID":"90106b59-2826-4770-8211-3ff275cb56fa","Type":"ContainerDied","Data":"588d2b239a1b6626600028ec1b36b214f98f0d93d5e0cef36b02110021a836c2"} Jan 23 18:27:33 crc kubenswrapper[4688]: I0123 18:27:33.838768 4688 generic.go:334] "Generic (PLEG): container finished" podID="ea982eec-acb6-45c7-8f69-36df2323747c" containerID="2f82f78969f6a901660fff71f27b953a7917931860e7f1b20bfaaa60c737f518" exitCode=0 Jan 23 18:27:33 crc kubenswrapper[4688]: I0123 18:27:33.838865 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-94mh9" event={"ID":"ea982eec-acb6-45c7-8f69-36df2323747c","Type":"ContainerDied","Data":"2f82f78969f6a901660fff71f27b953a7917931860e7f1b20bfaaa60c737f518"} Jan 23 18:27:33 crc kubenswrapper[4688]: I0123 18:27:33.843806 4688 generic.go:334] "Generic (PLEG): container finished" podID="620ac0a5-247a-4207-83e0-d6776834d4ad" containerID="f2bd73f8aadf30071096c98a62dd31573c124f2a1985baff609e903d5d7f7172" exitCode=0 Jan 23 18:27:33 crc kubenswrapper[4688]: I0123 18:27:33.843863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wcz56" event={"ID":"620ac0a5-247a-4207-83e0-d6776834d4ad","Type":"ContainerDied","Data":"f2bd73f8aadf30071096c98a62dd31573c124f2a1985baff609e903d5d7f7172"} Jan 23 18:27:33 crc kubenswrapper[4688]: I0123 18:27:33.921264 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 23 18:27:36 crc kubenswrapper[4688]: I0123 18:27:36.965068 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:27:36 crc kubenswrapper[4688]: I0123 18:27:36.965853 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:27:38 crc kubenswrapper[4688]: E0123 18:27:38.430723 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 23 18:27:38 crc kubenswrapper[4688]: E0123 18:27:38.431367 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkb4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-m28xl_openstack(31e41e2a-24eb-4116-8a8a-35e34558ec71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:27:38 crc kubenswrapper[4688]: E0123 18:27:38.432613 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/placement-db-sync-m28xl" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.537335 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734404 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734505 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734597 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734650 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734842 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.734887 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgrgw\" (UniqueName: \"kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw\") pod \"90106b59-2826-4770-8211-3ff275cb56fa\" (UID: \"90106b59-2826-4770-8211-3ff275cb56fa\") " Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.744661 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts" (OuterVolumeSpecName: "scripts") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.744715 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.747411 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw" (OuterVolumeSpecName: "kube-api-access-mgrgw") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "kube-api-access-mgrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.750302 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.761458 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data" (OuterVolumeSpecName: "config-data") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.769065 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90106b59-2826-4770-8211-3ff275cb56fa" (UID: "90106b59-2826-4770-8211-3ff275cb56fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838170 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838261 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838277 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838288 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgrgw\" (UniqueName: \"kubernetes.io/projected/90106b59-2826-4770-8211-3ff275cb56fa-kube-api-access-mgrgw\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838300 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.838310 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90106b59-2826-4770-8211-3ff275cb56fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.899400 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-4bsn8" event={"ID":"90106b59-2826-4770-8211-3ff275cb56fa","Type":"ContainerDied","Data":"de83ccaa6723beab2b94a30e663ee88cf5ac2388e70ca5dcc90fe0a3b26496b7"} Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.899446 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de83ccaa6723beab2b94a30e663ee88cf5ac2388e70ca5dcc90fe0a3b26496b7" Jan 23 18:27:38 crc kubenswrapper[4688]: I0123 18:27:38.899483 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4bsn8" Jan 23 18:27:38 crc kubenswrapper[4688]: E0123 18:27:38.902261 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-m28xl" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.629918 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4bsn8"] Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.644107 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4bsn8"] Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.725531 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6fttt"] Jan 23 18:27:39 crc kubenswrapper[4688]: E0123 18:27:39.726170 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90106b59-2826-4770-8211-3ff275cb56fa" containerName="keystone-bootstrap" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.726211 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="90106b59-2826-4770-8211-3ff275cb56fa" containerName="keystone-bootstrap" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.726418 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="90106b59-2826-4770-8211-3ff275cb56fa" containerName="keystone-bootstrap" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.727367 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.730565 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.730758 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.730828 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ttwkl" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.730937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.732112 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.738825 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6fttt"] Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761346 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761477 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761529 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761571 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761607 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.761885 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4k6r\" (UniqueName: \"kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863419 4688 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863514 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863635 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4k6r\" (UniqueName: \"kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.863753 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.868795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.879395 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.879603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.882731 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys\") pod \"keystone-bootstrap-6fttt\" (UID: 
\"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.883698 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:39 crc kubenswrapper[4688]: I0123 18:27:39.884001 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4k6r\" (UniqueName: \"kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r\") pod \"keystone-bootstrap-6fttt\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:40 crc kubenswrapper[4688]: I0123 18:27:40.055475 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:27:41 crc kubenswrapper[4688]: I0123 18:27:41.373387 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90106b59-2826-4770-8211-3ff275cb56fa" path="/var/lib/kubelet/pods/90106b59-2826-4770-8211-3ff275cb56fa/volumes" Jan 23 18:27:43 crc kubenswrapper[4688]: I0123 18:27:43.921121 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Jan 23 18:27:48 crc kubenswrapper[4688]: E0123 18:27:48.340560 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 23 18:27:48 crc kubenswrapper[4688]: E0123 18:27:48.341944 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7h62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vsp8t_openstack(b8d25eb5-0041-42b6-8b61-ad9e728c3049): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:27:48 crc kubenswrapper[4688]: E0123 18:27:48.344967 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vsp8t" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" Jan 23 18:27:48 crc kubenswrapper[4688]: I0123 18:27:48.921744 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Jan 23 18:27:48 crc kubenswrapper[4688]: I0123 18:27:48.922286 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:27:49 crc kubenswrapper[4688]: E0123 18:27:49.037683 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" 
pod="openstack/cinder-db-sync-vsp8t" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.703801 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wcz56" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.718692 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-94mh9" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.819224 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8pxj\" (UniqueName: \"kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj\") pod \"620ac0a5-247a-4207-83e0-d6776834d4ad\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.819381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle\") pod \"620ac0a5-247a-4207-83e0-d6776834d4ad\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.819416 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data\") pod \"620ac0a5-247a-4207-83e0-d6776834d4ad\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.819451 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data\") pod \"620ac0a5-247a-4207-83e0-d6776834d4ad\" (UID: \"620ac0a5-247a-4207-83e0-d6776834d4ad\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.824896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj" (OuterVolumeSpecName: "kube-api-access-x8pxj") pod "620ac0a5-247a-4207-83e0-d6776834d4ad" (UID: "620ac0a5-247a-4207-83e0-d6776834d4ad"). InnerVolumeSpecName "kube-api-access-x8pxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.830133 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "620ac0a5-247a-4207-83e0-d6776834d4ad" (UID: "620ac0a5-247a-4207-83e0-d6776834d4ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.854397 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "620ac0a5-247a-4207-83e0-d6776834d4ad" (UID: "620ac0a5-247a-4207-83e0-d6776834d4ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.879391 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data" (OuterVolumeSpecName: "config-data") pod "620ac0a5-247a-4207-83e0-d6776834d4ad" (UID: "620ac0a5-247a-4207-83e0-d6776834d4ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.921287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jggfx\" (UniqueName: \"kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx\") pod \"ea982eec-acb6-45c7-8f69-36df2323747c\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.921352 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data\") pod \"ea982eec-acb6-45c7-8f69-36df2323747c\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.921440 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle\") pod \"ea982eec-acb6-45c7-8f69-36df2323747c\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.921519 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data\") pod \"ea982eec-acb6-45c7-8f69-36df2323747c\" (UID: \"ea982eec-acb6-45c7-8f69-36df2323747c\") " Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.922093 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.922113 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.922124 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/620ac0a5-247a-4207-83e0-d6776834d4ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.922133 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8pxj\" (UniqueName: \"kubernetes.io/projected/620ac0a5-247a-4207-83e0-d6776834d4ad-kube-api-access-x8pxj\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.925218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ea982eec-acb6-45c7-8f69-36df2323747c" (UID: "ea982eec-acb6-45c7-8f69-36df2323747c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.925289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx" (OuterVolumeSpecName: "kube-api-access-jggfx") pod "ea982eec-acb6-45c7-8f69-36df2323747c" (UID: "ea982eec-acb6-45c7-8f69-36df2323747c"). InnerVolumeSpecName "kube-api-access-jggfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.948036 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea982eec-acb6-45c7-8f69-36df2323747c" (UID: "ea982eec-acb6-45c7-8f69-36df2323747c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:50 crc kubenswrapper[4688]: I0123 18:27:50.969075 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data" (OuterVolumeSpecName: "config-data") pod "ea982eec-acb6-45c7-8f69-36df2323747c" (UID: "ea982eec-acb6-45c7-8f69-36df2323747c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.024623 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.024670 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.024685 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jggfx\" (UniqueName: \"kubernetes.io/projected/ea982eec-acb6-45c7-8f69-36df2323747c-kube-api-access-jggfx\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.024704 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ea982eec-acb6-45c7-8f69-36df2323747c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.055154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-94mh9" event={"ID":"ea982eec-acb6-45c7-8f69-36df2323747c","Type":"ContainerDied","Data":"51aee3656b5089d30d3e12fb7d57a6fd82e6a13d79a6dd0c8efb4bc047a20d33"} Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.055260 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51aee3656b5089d30d3e12fb7d57a6fd82e6a13d79a6dd0c8efb4bc047a20d33" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.055364 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-94mh9" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.057700 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wcz56" event={"ID":"620ac0a5-247a-4207-83e0-d6776834d4ad","Type":"ContainerDied","Data":"31a71f3c92b29426a5957a90cb6802f908bad2969071acb6a4612f1505185f73"} Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.057753 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a71f3c92b29426a5957a90cb6802f908bad2969071acb6a4612f1505185f73" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.057760 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wcz56" Jan 23 18:27:51 crc kubenswrapper[4688]: E0123 18:27:51.355655 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 23 18:27:51 crc kubenswrapper[4688]: E0123 18:27:51.355824 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vb2bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xmgh7_openstack(fc227102-c953-4a8b-bfc2-918b63e457c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:27:51 crc kubenswrapper[4688]: E0123 18:27:51.357074 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xmgh7" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.399570 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.534706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc\") pod \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.534752 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp2nw\" (UniqueName: \"kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw\") pod \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.534874 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb\") pod \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.535009 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config\") pod \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.535040 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb\") pod \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\" (UID: \"37ba61eb-0e82-4af5-8756-cc56550dd6ed\") " Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.540410 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw" (OuterVolumeSpecName: "kube-api-access-tp2nw") pod "37ba61eb-0e82-4af5-8756-cc56550dd6ed" (UID: "37ba61eb-0e82-4af5-8756-cc56550dd6ed"). InnerVolumeSpecName "kube-api-access-tp2nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.577170 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config" (OuterVolumeSpecName: "config") pod "37ba61eb-0e82-4af5-8756-cc56550dd6ed" (UID: "37ba61eb-0e82-4af5-8756-cc56550dd6ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.579794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37ba61eb-0e82-4af5-8756-cc56550dd6ed" (UID: "37ba61eb-0e82-4af5-8756-cc56550dd6ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.580825 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37ba61eb-0e82-4af5-8756-cc56550dd6ed" (UID: "37ba61eb-0e82-4af5-8756-cc56550dd6ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.583459 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37ba61eb-0e82-4af5-8756-cc56550dd6ed" (UID: "37ba61eb-0e82-4af5-8756-cc56550dd6ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.637258 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.637488 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.637585 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.637644 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp2nw\" (UniqueName: \"kubernetes.io/projected/37ba61eb-0e82-4af5-8756-cc56550dd6ed-kube-api-access-tp2nw\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: I0123 18:27:51.637712 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ba61eb-0e82-4af5-8756-cc56550dd6ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:27:51 crc kubenswrapper[4688]: E0123 18:27:51.867797 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 23 18:27:51 crc kubenswrapper[4688]: E0123 18:27:51.868020 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h645h646h8h648h8h598h597h5fdh6bh566h86h56ch655h56fhb4h56h546h75h599h9h5b8h559hbhfhbdh688h695hb5h648h7fh5ffq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7q866,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cbbd26aa-7783-4958-95d0-a590f636947c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.043345 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: E0123 18:27:52.043922 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="init" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.043948 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="init" Jan 23 18:27:52 crc kubenswrapper[4688]: E0123 18:27:52.043966 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" containerName="glance-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.043974 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" containerName="glance-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: E0123 18:27:52.044002 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.044010 4688 
state_mem.go:107] "Deleted CPUSet assignment" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" Jan 23 18:27:52 crc kubenswrapper[4688]: E0123 18:27:52.044035 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" containerName="watcher-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.044042 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" containerName="watcher-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.044273 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" containerName="glance-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.044299 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" containerName="watcher-db-sync" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.044310 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.045528 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.048569 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.055151 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jznqw" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.063105 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.131917 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-jrlkz" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.133124 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-jrlkz" event={"ID":"37ba61eb-0e82-4af5-8756-cc56550dd6ed","Type":"ContainerDied","Data":"aed0d649cccaa5ebbc9920c5a4cc95b878fa4088ae23f95e5d2d735ed40e13ad"} Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.133332 4688 scope.go:117] "RemoveContainer" containerID="31502d29ac503f003a0dfb9f7b37d7e2e3fce08fdbbaeaefad880f0af1304fec" Jan 23 18:27:52 crc kubenswrapper[4688]: E0123 18:27:52.140267 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xmgh7" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.149632 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69lx\" (UniqueName: \"kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.149706 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.149739 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.149784 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.149851 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.182431 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.192820 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.211657 4688 scope.go:117] "RemoveContainer" containerID="67cd1110801959ec33e93d47acb0ec7095ed8383c356189d5d0b7c26fa3176c9" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.211937 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.268376 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.268714 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69lx\" (UniqueName: \"kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.268830 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.268885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.269009 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.269618 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.276813 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.305986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.321703 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.334511 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69lx\" (UniqueName: \"kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " 
pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.338903 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.346331 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-jrlkz"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.353076 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.376888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-config-data\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.377260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-logs\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.377394 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.377506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8gb\" (UniqueName: \"kubernetes.io/projected/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-kube-api-access-rt8gb\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.394529 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.428120 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.429548 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.466031 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.502791 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-config-data\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.503156 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-logs\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.503612 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.504107 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt8gb\" (UniqueName: \"kubernetes.io/projected/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-kube-api-access-rt8gb\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.523836 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-logs\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.526471 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.532734 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.534673 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt8gb\" (UniqueName: \"kubernetes.io/projected/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-kube-api-access-rt8gb\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.544603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/245b0b2d-bf7c-4ac9-9fc3-f530a5cffead-config-data\") pod \"watcher-applier-0\" (UID: \"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead\") " pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.602833 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.606512 4688 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.611076 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.611486 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.611612 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl57\" (UniqueName: \"kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.612569 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.612620 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.625518 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.679260 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714515 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714561 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714591 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714611 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714633 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714681 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714717 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714798 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714830 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: 
\"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714851 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.714892 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wl57\" (UniqueName: \"kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.716359 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.719593 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.722893 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.726748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.736314 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wl57\" (UniqueName: \"kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57\") pod \"watcher-decision-engine-0\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.795294 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820662 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820737 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820850 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820888 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820916 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.820964 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.822244 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.822603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.822914 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc 
kubenswrapper[4688]: I0123 18:27:52.823208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.830025 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.856299 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc\") pod \"dnsmasq-dns-785d8bcb8c-7jj5k\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.861956 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:27:52 crc kubenswrapper[4688]: I0123 18:27:52.987639 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.028421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-689f6b4f86-pbwfh"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.042250 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6fttt"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.108490 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.166100 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerStarted","Data":"10452553e627ad3a98a6ca4d955f1f3c9b427d8afde7af56ad4e6603f763a129"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.175646 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fttt" event={"ID":"fa85f4c3-ac71-4df0-be19-d498bad38459","Type":"ContainerStarted","Data":"4fa1c93d43bff600d54bd266d2d8f49d5f2d9c5b511b4c68c10c9aa47dde9847"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.195894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerStarted","Data":"f50d1e80a06eec536832077b72b853b8f2f951ab308fcd61b2907dae5d9e0569"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.203226 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689f6b4f86-pbwfh" event={"ID":"56f27597-f638-4b6d-84e9-3a3671c089ac","Type":"ContainerStarted","Data":"f46d8accd43341cf1fa6e085e51555f00458553da160833f15d3037cffc11581"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.212564 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerStarted","Data":"02f647b7d0d399af3a20071246d814ce19fddde727ee70622afd7e5a3eacf830"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 
18:27:53.217252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerStarted","Data":"e4e12533f97b009396b78264b0386f9f0c7ebea268eacf6a4cd992fafe1c0b95"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.217300 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerStarted","Data":"67eddeec582d2097fd83ccf70d7b625bb5d777a4f0a668b075319de94028c377"} Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.217613 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68fdb6575c-9fggx" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon-log" containerID="cri-o://67eddeec582d2097fd83ccf70d7b625bb5d777a4f0a668b075319de94028c377" gracePeriod=30 Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.217845 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-68fdb6575c-9fggx" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon" containerID="cri-o://e4e12533f97b009396b78264b0386f9f0c7ebea268eacf6a4cd992fafe1c0b95" gracePeriod=30 Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.260098 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68fdb6575c-9fggx" podStartSLOduration=3.471148765 podStartE2EDuration="35.25930625s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="2026-01-23 18:27:20.096420333 +0000 UTC m=+1235.092244784" lastFinishedPulling="2026-01-23 18:27:51.884577828 +0000 UTC m=+1266.880402269" observedRunningTime="2026-01-23 18:27:53.245093241 +0000 UTC m=+1268.240917702" watchObservedRunningTime="2026-01-23 18:27:53.25930625 +0000 UTC m=+1268.255130711" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.288382 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.341365 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.353078 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.355403 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.365042 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.365327 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.366022 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wjpvx" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.394592 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" path="/var/lib/kubelet/pods/37ba61eb-0e82-4af5-8756-cc56550dd6ed/volumes" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.397725 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.439676 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440358 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440430 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440458 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440541 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440679 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqr79\" (UniqueName: \"kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.440735 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.494501 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.516113 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.518309 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.521083 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.542633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqr79\" (UniqueName: \"kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.542833 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.542992 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543074 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543149 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543236 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543321 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs\") pod \"glance-default-external-api-0\" (UID: 
\"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543495 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543563 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.543924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.548999 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.550318 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.554303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.571105 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqr79\" (UniqueName: \"kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.576425 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.616643 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.646845 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b29x\" (UniqueName: 
\"kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.646916 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.646984 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.647011 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.647049 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.647239 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.647280 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.686941 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.707892 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751273 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751377 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751410 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751446 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751633 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751675 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751749 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b29x\" (UniqueName: \"kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.751812 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.752130 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.752562 4688 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.757560 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.757563 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.775766 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.779277 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b29x\" (UniqueName: \"kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.854374 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:27:53 crc kubenswrapper[4688]: I0123 18:27:53.922820 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-jrlkz" podUID="37ba61eb-0e82-4af5-8756-cc56550dd6ed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.153490 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.250257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689f6b4f86-pbwfh" event={"ID":"56f27597-f638-4b6d-84e9-3a3671c089ac","Type":"ContainerStarted","Data":"d12fa7f24611de7dd9e7b6bd235978afc8176858a6593b65e1d34d8c4f3f8a9d"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.253931 4688 generic.go:334] "Generic (PLEG): container finished" podID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerID="9debd0628af5646738279c14778affab05b8ccf4b800cd9a0d9eb670ff5dee4f" exitCode=0 Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.254063 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" event={"ID":"e32ebfca-afd2-4b49-a014-6246e2de8837","Type":"ContainerDied","Data":"9debd0628af5646738279c14778affab05b8ccf4b800cd9a0d9eb670ff5dee4f"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.254101 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" event={"ID":"e32ebfca-afd2-4b49-a014-6246e2de8837","Type":"ContainerStarted","Data":"3dc39624ebeefc6e025eedaa8513744693681ace04eebf7b23f4b4515778ea2e"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.263379 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerStarted","Data":"62ce1f46b085d1a812e5e3acd914ad43e5d2e2086f7695ec92bbe00cb3ba9c5d"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.265952 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f84479849-glxjc" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon-log" containerID="cri-o://10452553e627ad3a98a6ca4d955f1f3c9b427d8afde7af56ad4e6603f763a129" gracePeriod=30 Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.266059 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f84479849-glxjc" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon" containerID="cri-o://62ce1f46b085d1a812e5e3acd914ad43e5d2e2086f7695ec92bbe00cb3ba9c5d" gracePeriod=30 Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.268330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerStarted","Data":"ec8b8bc91a588637f13d00296fe17148bc41ebc794d46b44eacef30eeb89bdfc"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.268509 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-558dd665cf-xhjvb" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon-log" containerID="cri-o://f50d1e80a06eec536832077b72b853b8f2f951ab308fcd61b2907dae5d9e0569" gracePeriod=30 Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.268585 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-558dd665cf-xhjvb" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon" containerID="cri-o://ec8b8bc91a588637f13d00296fe17148bc41ebc794d46b44eacef30eeb89bdfc" gracePeriod=30 Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.290326 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"1471c070-2a62-4080-95d8-4f60a523efaa","Type":"ContainerStarted","Data":"9255244dec4d66893047215440240046648715208803df10420601cb7746ebf6"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.298171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m28xl" event={"ID":"31e41e2a-24eb-4116-8a8a-35e34558ec71","Type":"ContainerStarted","Data":"d9ac0803562b6b8420a419dbd19913965963fed62df14784880968613cc21b36"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.338894 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f84479849-glxjc" podStartSLOduration=3.9117390260000002 podStartE2EDuration="33.33886098s" podCreationTimestamp="2026-01-23 18:27:21 +0000 UTC" firstStartedPulling="2026-01-23 18:27:22.660309338 +0000 UTC m=+1237.656133789" lastFinishedPulling="2026-01-23 18:27:52.087431302 +0000 UTC m=+1267.083255743" observedRunningTime="2026-01-23 18:27:54.309666279 +0000 UTC m=+1269.305490730" watchObservedRunningTime="2026-01-23 18:27:54.33886098 +0000 UTC m=+1269.334685421" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.349519 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead","Type":"ContainerStarted","Data":"7d0189ac34c853ed53534139df19b2168f4b2ff5a7ca6d1182abef97b5921f2f"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.378054 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-558dd665cf-xhjvb" podStartSLOduration=4.968182361 podStartE2EDuration="36.378023538s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="2026-01-23 18:27:20.652248165 +0000 UTC m=+1235.648072606" lastFinishedPulling="2026-01-23 18:27:52.062089342 +0000 UTC m=+1267.057913783" observedRunningTime="2026-01-23 18:27:54.341535307 +0000 UTC m=+1269.337359758" watchObservedRunningTime="2026-01-23 18:27:54.378023538 +0000 UTC m=+1269.373847979" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.391179 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerStarted","Data":"edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.391260 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerStarted","Data":"f83895854bacf2798dc3dc8ac4b2a50c9ea0930b9527f30a323ef71f1d6f96e2"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.394043 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerStarted","Data":"4271017062f5efe1ad440674dd96ee3294ac5698541fecfc2b277c745aabfb91"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.394087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerStarted","Data":"b4c24ea39c74bba63efe6d737ad095d639fe161dd90c941f3a18e8223e11cb70"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.404031 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m28xl" podStartSLOduration=3.454086021 podStartE2EDuration="36.404007867s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" 
firstStartedPulling="2026-01-23 18:27:20.528503447 +0000 UTC m=+1235.524327888" lastFinishedPulling="2026-01-23 18:27:53.478425293 +0000 UTC m=+1268.474249734" observedRunningTime="2026-01-23 18:27:54.366408984 +0000 UTC m=+1269.362233435" watchObservedRunningTime="2026-01-23 18:27:54.404007867 +0000 UTC m=+1269.399832318" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.417546 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fttt" event={"ID":"fa85f4c3-ac71-4df0-be19-d498bad38459","Type":"ContainerStarted","Data":"ca064dbf5bc08e7134acd87df534397b271e4dcb3e7ae009f5374fc5de39b9e5"} Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.443595 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c854fbb9b-lr4lr" podStartSLOduration=27.44333001 podStartE2EDuration="27.44333001s" podCreationTimestamp="2026-01-23 18:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:54.419341519 +0000 UTC m=+1269.415165970" watchObservedRunningTime="2026-01-23 18:27:54.44333001 +0000 UTC m=+1269.439154451" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.450057 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6fttt" podStartSLOduration=15.450041263 podStartE2EDuration="15.450041263s" podCreationTimestamp="2026-01-23 18:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:54.448146879 +0000 UTC m=+1269.443971320" watchObservedRunningTime="2026-01-23 18:27:54.450041263 +0000 UTC m=+1269.445865704" Jan 23 18:27:54 crc kubenswrapper[4688]: I0123 18:27:54.548992 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:27:55 crc kubenswrapper[4688]: I0123 18:27:55.437648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689f6b4f86-pbwfh" event={"ID":"56f27597-f638-4b6d-84e9-3a3671c089ac","Type":"ContainerStarted","Data":"1a81bd590df2aec524c3ec13233f98ebdf699927b59ed3118001e95b865fe0d3"} Jan 23 18:27:55 crc kubenswrapper[4688]: I0123 18:27:55.546397 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-689f6b4f86-pbwfh" podStartSLOduration=27.546364386 podStartE2EDuration="27.546364386s" podCreationTimestamp="2026-01-23 18:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:55.54268162 +0000 UTC m=+1270.538506081" watchObservedRunningTime="2026-01-23 18:27:55.546364386 +0000 UTC m=+1270.542188827" Jan 23 18:27:56 crc kubenswrapper[4688]: W0123 18:27:56.353365 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod575fd224_8249_4f3d_8698_3ac44f1dc581.slice/crio-c74a06dd93e554edf33e552f8b96d099496be0c19057fdc946f5eb93c9078da8 WatchSource:0}: Error finding container c74a06dd93e554edf33e552f8b96d099496be0c19057fdc946f5eb93c9078da8: Status 404 returned error can't find the container with id c74a06dd93e554edf33e552f8b96d099496be0c19057fdc946f5eb93c9078da8 Jan 23 18:27:56 crc kubenswrapper[4688]: I0123 18:27:56.471652 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerStarted","Data":"c74a06dd93e554edf33e552f8b96d099496be0c19057fdc946f5eb93c9078da8"} Jan 23 18:27:56 crc kubenswrapper[4688]: I0123 18:27:56.488447 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:27:56 crc kubenswrapper[4688]: I0123 18:27:56.580878 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:27:57 crc kubenswrapper[4688]: I0123 18:27:57.733848 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:27:58 crc kubenswrapper[4688]: I0123 18:27:58.259338 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:58 crc kubenswrapper[4688]: I0123 18:27:58.259751 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:27:58 crc kubenswrapper[4688]: I0123 18:27:58.496656 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:58 crc kubenswrapper[4688]: I0123 18:27:58.496978 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:27:58 crc kubenswrapper[4688]: I0123 18:27:58.943595 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68fdb6575c-9fggx" Jan 23 18:27:59 crc kubenswrapper[4688]: I0123 18:27:59.509383 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerStarted","Data":"82a9398a03cb91d3d33d9f3e3f9c37bc2915a944bbde249fb5d6d83eb649f6c6"} Jan 23 18:27:59 crc kubenswrapper[4688]: I0123 18:27:59.575566 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=7.575518327 podStartE2EDuration="7.575518327s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:27:59.533565689 +0000 UTC m=+1274.529390140" watchObservedRunningTime="2026-01-23 18:27:59.575518327 +0000 UTC m=+1274.571342768" Jan 23 18:27:59 crc kubenswrapper[4688]: I0123 18:27:59.884654 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:27:59 crc kubenswrapper[4688]: W0123 18:27:59.947028 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7222ebfd_b055_4c8f_92f1_2ce61df6fd7f.slice/crio-a07ef06ab08cf8a0a0f9b0ce5d7b384c1e82ba9bc9b972f5b0095a91d90fa6b5 WatchSource:0}: Error finding container a07ef06ab08cf8a0a0f9b0ce5d7b384c1e82ba9bc9b972f5b0095a91d90fa6b5: Status 404 returned error can't find the container with id a07ef06ab08cf8a0a0f9b0ce5d7b384c1e82ba9bc9b972f5b0095a91d90fa6b5 Jan 23 18:28:00 crc kubenswrapper[4688]: I0123 18:28:00.529607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerStarted","Data":"a07ef06ab08cf8a0a0f9b0ce5d7b384c1e82ba9bc9b972f5b0095a91d90fa6b5"} Jan 23 18:28:00 crc kubenswrapper[4688]: I0123 18:28:00.549470 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:28:00 crc kubenswrapper[4688]: I0123 18:28:00.562048 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 18:28:00 crc kubenswrapper[4688]: I0123 18:28:00.564111 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.0347377 podStartE2EDuration="8.564081316s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="2026-01-23 18:27:53.525711635 +0000 UTC m=+1268.521536076" lastFinishedPulling="2026-01-23 18:28:00.055055251 +0000 UTC m=+1275.050879692" observedRunningTime="2026-01-23 18:28:00.560507783 +0000 UTC m=+1275.556332224" watchObservedRunningTime="2026-01-23 18:28:00.564081316 +0000 UTC m=+1275.559905767" Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.599459 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerStarted","Data":"4a5158e37292c3bf94edd9fabe18e893e3390fe92315e283ab452698d78a62b9"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.627402 4688 generic.go:334] "Generic (PLEG): container finished" podID="fa85f4c3-ac71-4df0-be19-d498bad38459" containerID="ca064dbf5bc08e7134acd87df534397b271e4dcb3e7ae009f5374fc5de39b9e5" exitCode=0 Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.627507 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fttt" event={"ID":"fa85f4c3-ac71-4df0-be19-d498bad38459","Type":"ContainerDied","Data":"ca064dbf5bc08e7134acd87df534397b271e4dcb3e7ae009f5374fc5de39b9e5"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.637579 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerStarted","Data":"c347f90fcb8af8861c767b89b5fc3d1a2bb893c5c6b940e1ddba4c0123aec18c"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.663160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1471c070-2a62-4080-95d8-4f60a523efaa","Type":"ContainerStarted","Data":"f95bd7d962c5bfada63e3514a530a1139b422e0c58ae9d1e803f35f91a554f59"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.667037 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" podStartSLOduration=9.667017429 podStartE2EDuration="9.667017429s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:00.602688758 +0000 UTC m=+1275.598513229" watchObservedRunningTime="2026-01-23 18:28:01.667017429 +0000 UTC m=+1276.662841870" Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.684809 4688 generic.go:334] "Generic (PLEG): container finished" podID="31e41e2a-24eb-4116-8a8a-35e34558ec71" containerID="d9ac0803562b6b8420a419dbd19913965963fed62df14784880968613cc21b36" exitCode=0 Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.684942 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m28xl" event={"ID":"31e41e2a-24eb-4116-8a8a-35e34558ec71","Type":"ContainerDied","Data":"d9ac0803562b6b8420a419dbd19913965963fed62df14784880968613cc21b36"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.714729 4688 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"245b0b2d-bf7c-4ac9-9fc3-f530a5cffead","Type":"ContainerStarted","Data":"4cbb9445fd38a557bd52c698eafca652495db4b7c7cdfbeb34029f2879d5f50f"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.723699 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.137732555 podStartE2EDuration="9.723678672s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="2026-01-23 18:27:53.465066288 +0000 UTC m=+1268.460890729" lastFinishedPulling="2026-01-23 18:28:00.051012405 +0000 UTC m=+1275.046836846" observedRunningTime="2026-01-23 18:28:01.685413799 +0000 UTC m=+1276.681238260" watchObservedRunningTime="2026-01-23 18:28:01.723678672 +0000 UTC m=+1276.719503113" Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.732798 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" event={"ID":"e32ebfca-afd2-4b49-a014-6246e2de8837","Type":"ContainerStarted","Data":"d6d0387f0955a279f866c322009214147e4b3b6f4982865b5f9ac5fdd73411f1"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.734788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerStarted","Data":"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"} Jan 23 18:28:01 crc kubenswrapper[4688]: I0123 18:28:01.928923 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.395556 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.395922 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.680241 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.680308 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.726451 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.758576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerStarted","Data":"56f57b61387c8e56125493f7e855229460cd8d432f31a845d763e627f9802f5d"} Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.758791 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-log" containerID="cri-o://4a5158e37292c3bf94edd9fabe18e893e3390fe92315e283ab452698d78a62b9" gracePeriod=30 Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.759597 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-httpd" containerID="cri-o://56f57b61387c8e56125493f7e855229460cd8d432f31a845d763e627f9802f5d" gracePeriod=30 Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 
18:28:02.780066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerStarted","Data":"c479cd4f7f23a7db695c02255c7f213506c31339068aab5242795d2896a871a3"} Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.780222 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-log" containerID="cri-o://c347f90fcb8af8861c767b89b5fc3d1a2bb893c5c6b940e1ddba4c0123aec18c" gracePeriod=30 Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.780333 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.780364 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-httpd" containerID="cri-o://c479cd4f7f23a7db695c02255c7f213506c31339068aab5242795d2896a871a3" gracePeriod=30 Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.797289 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.797683 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.797658701 podStartE2EDuration="10.797658701s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:02.794233902 +0000 UTC m=+1277.790058363" watchObservedRunningTime="2026-01-23 18:28:02.797658701 +0000 UTC m=+1277.793483152" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.840610 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.840588778 podStartE2EDuration="10.840588778s" podCreationTimestamp="2026-01-23 18:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:02.829493358 +0000 UTC m=+1277.825317799" watchObservedRunningTime="2026-01-23 18:28:02.840588778 +0000 UTC m=+1277.836413219" Jan 23 18:28:02 crc kubenswrapper[4688]: I0123 18:28:02.852203 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.053528 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.440501 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.493815 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.580990 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4k6r\" (UniqueName: \"kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.581105 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.581152 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.581237 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.581356 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.581492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys\") pod \"fa85f4c3-ac71-4df0-be19-d498bad38459\" (UID: \"fa85f4c3-ac71-4df0-be19-d498bad38459\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.595808 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r" (OuterVolumeSpecName: "kube-api-access-g4k6r") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "kube-api-access-g4k6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.601449 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.603400 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.634742 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts" (OuterVolumeSpecName: "scripts") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.714415 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.714455 4688 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.714471 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4k6r\" (UniqueName: \"kubernetes.io/projected/fa85f4c3-ac71-4df0-be19-d498bad38459-kube-api-access-g4k6r\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.714481 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.738752 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.767699 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data" (OuterVolumeSpecName: "config-data") pod "fa85f4c3-ac71-4df0-be19-d498bad38459" (UID: "fa85f4c3-ac71-4df0-be19-d498bad38459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.785804 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m28xl" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.813798 4688 generic.go:334] "Generic (PLEG): container finished" podID="7226bf67-7adb-4ce2-b595-957d81002a96" containerID="356b4164f0ea8137384f762b11a26da39f79f0cbd7592fd69b395ce91bbe8925" exitCode=0 Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.813927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9lvbq" event={"ID":"7226bf67-7adb-4ce2-b595-957d81002a96","Type":"ContainerDied","Data":"356b4164f0ea8137384f762b11a26da39f79f0cbd7592fd69b395ce91bbe8925"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.815897 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle\") pod \"31e41e2a-24eb-4116-8a8a-35e34558ec71\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.815995 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data\") pod \"31e41e2a-24eb-4116-8a8a-35e34558ec71\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.816097 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb4q\" (UniqueName: \"kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q\") pod \"31e41e2a-24eb-4116-8a8a-35e34558ec71\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.816268 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts\") pod \"31e41e2a-24eb-4116-8a8a-35e34558ec71\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.816378 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs\") pod \"31e41e2a-24eb-4116-8a8a-35e34558ec71\" (UID: \"31e41e2a-24eb-4116-8a8a-35e34558ec71\") " Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.816971 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.816992 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa85f4c3-ac71-4df0-be19-d498bad38459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.817418 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs" (OuterVolumeSpecName: "logs") pod "31e41e2a-24eb-4116-8a8a-35e34558ec71" (UID: "31e41e2a-24eb-4116-8a8a-35e34558ec71"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.828674 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q" (OuterVolumeSpecName: "kube-api-access-gkb4q") pod "31e41e2a-24eb-4116-8a8a-35e34558ec71" (UID: "31e41e2a-24eb-4116-8a8a-35e34558ec71"). InnerVolumeSpecName "kube-api-access-gkb4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.848055 4688 generic.go:334] "Generic (PLEG): container finished" podID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerID="56f57b61387c8e56125493f7e855229460cd8d432f31a845d763e627f9802f5d" exitCode=143 Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.848094 4688 generic.go:334] "Generic (PLEG): container finished" podID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerID="4a5158e37292c3bf94edd9fabe18e893e3390fe92315e283ab452698d78a62b9" exitCode=143 Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.848205 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerDied","Data":"56f57b61387c8e56125493f7e855229460cd8d432f31a845d763e627f9802f5d"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.848242 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerDied","Data":"4a5158e37292c3bf94edd9fabe18e893e3390fe92315e283ab452698d78a62b9"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.848873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts" (OuterVolumeSpecName: "scripts") pod "31e41e2a-24eb-4116-8a8a-35e34558ec71" (UID: "31e41e2a-24eb-4116-8a8a-35e34558ec71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.861522 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6fttt" event={"ID":"fa85f4c3-ac71-4df0-be19-d498bad38459","Type":"ContainerDied","Data":"4fa1c93d43bff600d54bd266d2d8f49d5f2d9c5b511b4c68c10c9aa47dde9847"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.861565 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa1c93d43bff600d54bd266d2d8f49d5f2d9c5b511b4c68c10c9aa47dde9847" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.861632 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6fttt" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.868742 4688 generic.go:334] "Generic (PLEG): container finished" podID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerID="c479cd4f7f23a7db695c02255c7f213506c31339068aab5242795d2896a871a3" exitCode=143 Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.868781 4688 generic.go:334] "Generic (PLEG): container finished" podID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerID="c347f90fcb8af8861c767b89b5fc3d1a2bb893c5c6b940e1ddba4c0123aec18c" exitCode=143 Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.868827 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerDied","Data":"c479cd4f7f23a7db695c02255c7f213506c31339068aab5242795d2896a871a3"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.868855 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerDied","Data":"c347f90fcb8af8861c767b89b5fc3d1a2bb893c5c6b940e1ddba4c0123aec18c"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.871515 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data" (OuterVolumeSpecName: "config-data") pod "31e41e2a-24eb-4116-8a8a-35e34558ec71" (UID: "31e41e2a-24eb-4116-8a8a-35e34558ec71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.872633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m28xl" event={"ID":"31e41e2a-24eb-4116-8a8a-35e34558ec71","Type":"ContainerDied","Data":"47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51"} Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.872675 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47249456bc144361787ae380e05d1e0f86420861a4983967fc81848ff0a68d51" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.873133 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.878208 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m28xl" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.888461 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-788dd47598-8wt2n"] Jan 23 18:28:03 crc kubenswrapper[4688]: E0123 18:28:03.888998 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" containerName="placement-db-sync" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.889014 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" containerName="placement-db-sync" Jan 23 18:28:03 crc kubenswrapper[4688]: E0123 18:28:03.889028 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f4c3-ac71-4df0-be19-d498bad38459" containerName="keystone-bootstrap" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.889035 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f4c3-ac71-4df0-be19-d498bad38459" containerName="keystone-bootstrap" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.889340 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa85f4c3-ac71-4df0-be19-d498bad38459" containerName="keystone-bootstrap" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.889419 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" containerName="placement-db-sync" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.890826 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.894740 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.896660 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.896834 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.897089 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.897296 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.897572 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ttwkl" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.911324 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-788dd47598-8wt2n"] Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.920685 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-fernet-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.920795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-public-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " 
pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.920833 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-combined-ca-bundle\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.920888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-config-data\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.920949 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-scripts\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921034 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fpn5\" (UniqueName: \"kubernetes.io/projected/cd02fba1-c4c0-4603-8801-92a63fa59f6a-kube-api-access-9fpn5\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-credential-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921230 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-internal-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921311 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921326 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31e41e2a-24eb-4116-8a8a-35e34558ec71-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921339 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.921351 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb4q\" (UniqueName: \"kubernetes.io/projected/31e41e2a-24eb-4116-8a8a-35e34558ec71-kube-api-access-gkb4q\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:03 crc kubenswrapper[4688]: I0123 18:28:03.938431 
4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31e41e2a-24eb-4116-8a8a-35e34558ec71" (UID: "31e41e2a-24eb-4116-8a8a-35e34558ec71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.019522 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.030942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-fernet-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-public-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-combined-ca-bundle\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031268 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-config-data\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-scripts\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031580 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fpn5\" (UniqueName: \"kubernetes.io/projected/cd02fba1-c4c0-4603-8801-92a63fa59f6a-kube-api-access-9fpn5\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.031653 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-credential-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.032723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-internal-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: 
\"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.032840 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31e41e2a-24eb-4116-8a8a-35e34558ec71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.050232 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-config-data\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.052479 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-scripts\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.052608 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-internal-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.057092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-credential-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.057583 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-combined-ca-bundle\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.058149 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-public-tls-certs\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.059176 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cd02fba1-c4c0-4603-8801-92a63fa59f6a-fernet-keys\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.106162 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fpn5\" (UniqueName: \"kubernetes.io/projected/cd02fba1-c4c0-4603-8801-92a63fa59f6a-kube-api-access-9fpn5\") pod \"keystone-788dd47598-8wt2n\" (UID: \"cd02fba1-c4c0-4603-8801-92a63fa59f6a\") " pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.181038 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.261522 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263079 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263128 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263208 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b29x\" (UniqueName: \"kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263274 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263303 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.263445 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts\") pod \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\" (UID: \"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f\") " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.265425 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.265679 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs" (OuterVolumeSpecName: "logs") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.273630 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.275449 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x" (OuterVolumeSpecName: "kube-api-access-4b29x") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "kube-api-access-4b29x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.276746 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts" (OuterVolumeSpecName: "scripts") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.289319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.338422 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.369947 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.370095 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.370207 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.370292 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b29x\" (UniqueName: \"kubernetes.io/projected/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-kube-api-access-4b29x\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.370423 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.370510 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.427415 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data" (OuterVolumeSpecName: "config-data") pod "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" (UID: "7222ebfd-b055-4c8f-92f1-2ce61df6fd7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.431999 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.474083 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.474136 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.701062 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.881992 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-788dd47598-8wt2n"] Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.905079 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.905485 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7222ebfd-b055-4c8f-92f1-2ce61df6fd7f","Type":"ContainerDied","Data":"a07ef06ab08cf8a0a0f9b0ce5d7b384c1e82ba9bc9b972f5b0095a91d90fa6b5"} Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.905553 4688 scope.go:117] "RemoveContainer" containerID="c479cd4f7f23a7db695c02255c7f213506c31339068aab5242795d2896a871a3" Jan 23 18:28:04 crc kubenswrapper[4688]: I0123 18:28:04.995121 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.051376 4688 scope.go:117] "RemoveContainer" containerID="c347f90fcb8af8861c767b89b5fc3d1a2bb893c5c6b940e1ddba4c0123aec18c" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.054695 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.107272 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:28:05 crc kubenswrapper[4688]: E0123 18:28:05.107943 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-httpd" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.107976 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-httpd" Jan 23 18:28:05 crc kubenswrapper[4688]: E0123 18:28:05.108011 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-log" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.108023 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-log" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.108345 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-log" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.108370 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" containerName="glance-httpd" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.109847 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.119750 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.119989 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.149265 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214272 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214316 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214379 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214413 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q79z\" (UniqueName: \"kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214461 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214491 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.214528 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.263286 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6df8898f5b-rfw5n"] Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.265725 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.281613 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.281795 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-bzgwv" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.281920 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.281949 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.282326 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316583 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316636 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-scripts\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316682 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169bb621-8517-44d2-9193-1b75492e148f-logs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-internal-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316749 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316778 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316822 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316843 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-config-data\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316864 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q79z\" (UniqueName: \"kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316909 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316931 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfcmt\" (UniqueName: \"kubernetes.io/projected/169bb621-8517-44d2-9193-1b75492e148f-kube-api-access-wfcmt\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316955 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.316982 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-public-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.317015 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-combined-ca-bundle\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.317930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.318348 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.318693 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.329639 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.343048 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.343758 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6df8898f5b-rfw5n"] Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.343866 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.346986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.404483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q79z\" (UniqueName: \"kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc 
kubenswrapper[4688]: I0123 18:28:05.422839 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-scripts\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.422939 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169bb621-8517-44d2-9193-1b75492e148f-logs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.422997 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-internal-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.423112 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-config-data\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.423173 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfcmt\" (UniqueName: \"kubernetes.io/projected/169bb621-8517-44d2-9193-1b75492e148f-kube-api-access-wfcmt\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.423232 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-public-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.423281 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-combined-ca-bundle\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.431984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-combined-ca-bundle\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.440060 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169bb621-8517-44d2-9193-1b75492e148f-logs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.443606 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-config-data\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.446760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-scripts\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.455043 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-internal-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.456341 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/169bb621-8517-44d2-9193-1b75492e148f-public-tls-certs\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.472871 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") " pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.480351 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfcmt\" (UniqueName: \"kubernetes.io/projected/169bb621-8517-44d2-9193-1b75492e148f-kube-api-access-wfcmt\") pod \"placement-6df8898f5b-rfw5n\" (UID: \"169bb621-8517-44d2-9193-1b75492e148f\") " pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.729791 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7222ebfd-b055-4c8f-92f1-2ce61df6fd7f" path="/var/lib/kubelet/pods/7222ebfd-b055-4c8f-92f1-2ce61df6fd7f/volumes" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.787483 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.825287 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.839845 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.939052 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59nw\" (UniqueName: \"kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw\") pod \"7226bf67-7adb-4ce2-b595-957d81002a96\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.939143 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config\") pod \"7226bf67-7adb-4ce2-b595-957d81002a96\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.939230 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle\") pod \"7226bf67-7adb-4ce2-b595-957d81002a96\" (UID: \"7226bf67-7adb-4ce2-b595-957d81002a96\") " Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.940198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-788dd47598-8wt2n" event={"ID":"cd02fba1-c4c0-4603-8801-92a63fa59f6a","Type":"ContainerStarted","Data":"dc28069ebdff95d5f75eb6a7fbd35329798a24ff7aa53b6a7067c324baadeed9"} Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.944494 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw" (OuterVolumeSpecName: "kube-api-access-k59nw") pod "7226bf67-7adb-4ce2-b595-957d81002a96" (UID: "7226bf67-7adb-4ce2-b595-957d81002a96"). InnerVolumeSpecName "kube-api-access-k59nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.950267 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9lvbq" event={"ID":"7226bf67-7adb-4ce2-b595-957d81002a96","Type":"ContainerDied","Data":"ead71136a3f370d7c9ba359dfccc3aef95ee7e1a324b09e24957b2645b1fc4f7"} Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.950324 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead71136a3f370d7c9ba359dfccc3aef95ee7e1a324b09e24957b2645b1fc4f7" Jan 23 18:28:05 crc kubenswrapper[4688]: I0123 18:28:05.950408 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9lvbq" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.007361 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config" (OuterVolumeSpecName: "config") pod "7226bf67-7adb-4ce2-b595-957d81002a96" (UID: "7226bf67-7adb-4ce2-b595-957d81002a96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.034593 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7226bf67-7adb-4ce2-b595-957d81002a96" (UID: "7226bf67-7adb-4ce2-b595-957d81002a96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.050745 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59nw\" (UniqueName: \"kubernetes.io/projected/7226bf67-7adb-4ce2-b595-957d81002a96-kube-api-access-k59nw\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.050786 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.050795 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7226bf67-7adb-4ce2-b595-957d81002a96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:06 crc kubenswrapper[4688]: E0123 18:28:06.545563 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7226bf67_7adb_4ce2_b595_957d81002a96.slice/crio-ead71136a3f370d7c9ba359dfccc3aef95ee7e1a324b09e24957b2645b1fc4f7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7226bf67_7adb_4ce2_b595_957d81002a96.slice\": RecentStats: unable to find data in memory cache]" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.897755 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.940122 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.949575 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6df8898f5b-rfw5n"] Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.965653 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.965718 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.965773 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.966746 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.966809 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e" gracePeriod=600 Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.989161 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.989620 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="dnsmasq-dns" containerID="cri-o://d6d0387f0955a279f866c322009214147e4b3b6f4982865b5f9ac5fdd73411f1" gracePeriod=10 Jan 23 18:28:06 crc kubenswrapper[4688]: I0123 18:28:06.997176 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009542 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqr79\" (UniqueName: \"kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009627 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009754 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009791 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009819 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.009968 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.010032 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts\") pod \"575fd224-8249-4f3d-8698-3ac44f1dc581\" (UID: \"575fd224-8249-4f3d-8698-3ac44f1dc581\") " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.014847 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.017668 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.018061 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs" (OuterVolumeSpecName: "logs") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.023246 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts" (OuterVolumeSpecName: "scripts") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.052232 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.052489 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"575fd224-8249-4f3d-8698-3ac44f1dc581","Type":"ContainerDied","Data":"c74a06dd93e554edf33e552f8b96d099496be0c19057fdc946f5eb93c9078da8"} Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.052618 4688 scope.go:117] "RemoveContainer" containerID="56f57b61387c8e56125493f7e855229460cd8d432f31a845d763e627f9802f5d" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.052695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79" (OuterVolumeSpecName: "kube-api-access-mqr79") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "kube-api-access-mqr79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.078249 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"] Jan 23 18:28:07 crc kubenswrapper[4688]: E0123 18:28:07.078871 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7226bf67-7adb-4ce2-b595-957d81002a96" containerName="neutron-db-sync" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.078885 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7226bf67-7adb-4ce2-b595-957d81002a96" containerName="neutron-db-sync" Jan 23 18:28:07 crc kubenswrapper[4688]: E0123 18:28:07.078900 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-log" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.078907 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-log" Jan 23 18:28:07 crc kubenswrapper[4688]: E0123 18:28:07.078924 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-httpd" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.078931 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-httpd" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.079174 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-log" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.079247 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7226bf67-7adb-4ce2-b595-957d81002a96" containerName="neutron-db-sync" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.079260 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" containerName="glance-httpd" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.089780 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.098391 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df8898f5b-rfw5n" event={"ID":"169bb621-8517-44d2-9193-1b75492e148f","Type":"ContainerStarted","Data":"93f70881d4fbeb62d13e4535e77979a9632b6ca2257f085614f32dc29d7fb0df"} Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.115835 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.115894 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.115975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np2cb\" (UniqueName: \"kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116218 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqr79\" (UniqueName: \"kubernetes.io/projected/575fd224-8249-4f3d-8698-3ac44f1dc581-kube-api-access-mqr79\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116241 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116251 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116259 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/575fd224-8249-4f3d-8698-3ac44f1dc581-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.116269 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.126549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerStarted","Data":"d10db3724dee6951326de587d7b9d25f55c97a9d501a4665c79d5413a962b372"} Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.128591 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.153466 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74745fc86b-bp676"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.155808 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.155970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.166315 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data" (OuterVolumeSpecName: "config-data") pod "575fd224-8249-4f3d-8698-3ac44f1dc581" (UID: "575fd224-8249-4f3d-8698-3ac44f1dc581"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.169662 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.169999 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.170238 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.170306 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.170380 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f44g6" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.185556 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74745fc86b-bp676"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219689 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219750 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219805 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219841 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.219931 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.220008 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.220046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.220099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np2cb\" (UniqueName: \"kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.220197 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m5tz\" (UniqueName: \"kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.220249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.223031 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.225268 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.225947 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.226608 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.226651 4688 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.226669 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/575fd224-8249-4f3d-8698-3ac44f1dc581-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.226686 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.227173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.264933 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np2cb\" (UniqueName: \"kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb\") pod \"dnsmasq-dns-55f844cf75-bk5ht\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") " pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.333603 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.335283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.335361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.335547 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m5tz\" (UniqueName: \"kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.335612 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.335733 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 
18:28:07.357329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.357816 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.363938 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.364076 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m5tz\" (UniqueName: \"kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.375544 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle\") pod \"neutron-74745fc86b-bp676\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.441026 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.442793 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.501733 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.525670 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.527484 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.530362 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.530656 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.547170 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.652923 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653032 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653168 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653345 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653399 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.653432 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4j6bw\" (UniqueName: \"kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.666861 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761640 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761700 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761777 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761825 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.761973 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j6bw\" (UniqueName: \"kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.762023 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: 
I0123 18:28:07.763565 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.764127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.764608 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.769451 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.769865 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.776868 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.790841 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.796133 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j6bw\" (UniqueName: \"kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.817541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " pod="openstack/glance-default-external-api-0" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.863594 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" 
podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: connect: connection refused" Jan 23 18:28:07 crc kubenswrapper[4688]: I0123 18:28:07.878079 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:28:08 crc kubenswrapper[4688]: I0123 18:28:08.261902 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:28:08 crc kubenswrapper[4688]: I0123 18:28:08.498287 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 23 18:28:08 crc kubenswrapper[4688]: I0123 18:28:08.708002 4688 scope.go:117] "RemoveContainer" containerID="4a5158e37292c3bf94edd9fabe18e893e3390fe92315e283ab452698d78a62b9" Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.242892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-788dd47598-8wt2n" event={"ID":"cd02fba1-c4c0-4603-8801-92a63fa59f6a","Type":"ContainerStarted","Data":"8042805dd6f6737dd218572c7051a3546d795f1b19af06fad441662a904ee97e"} Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.243479 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.253801 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e" exitCode=0 Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.253871 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e"} Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.253920 4688 scope.go:117] "RemoveContainer" containerID="ce2ee85d69f22a706875c0452ba1efb42e44916bb5588111fe1426c3ed55e5f2" Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.268086 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-788dd47598-8wt2n" podStartSLOduration=6.26806385 podStartE2EDuration="6.26806385s" podCreationTimestamp="2026-01-23 18:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:09.26146082 +0000 UTC m=+1284.257285261" watchObservedRunningTime="2026-01-23 18:28:09.26806385 +0000 UTC m=+1284.263888301" Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.299017 4688 generic.go:334] "Generic (PLEG): container finished" podID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerID="d6d0387f0955a279f866c322009214147e4b3b6f4982865b5f9ac5fdd73411f1" exitCode=0 Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.299076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" 
event={"ID":"e32ebfca-afd2-4b49-a014-6246e2de8837","Type":"ContainerDied","Data":"d6d0387f0955a279f866c322009214147e4b3b6f4982865b5f9ac5fdd73411f1"} Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.376084 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="575fd224-8249-4f3d-8698-3ac44f1dc581" path="/var/lib/kubelet/pods/575fd224-8249-4f3d-8698-3ac44f1dc581/volumes" Jan 23 18:28:09 crc kubenswrapper[4688]: I0123 18:28:09.930754 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74745fc86b-bp676"] Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.050795 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:28:10 crc kubenswrapper[4688]: W0123 18:28:10.067099 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod424368f6_fce1_4e7d_b400_9554ec6a4fd3.slice/crio-c7d11a33a01ff52b5065b7262cd101b8929e668a4fb99474d4dbb76f30a152b6 WatchSource:0}: Error finding container c7d11a33a01ff52b5065b7262cd101b8929e668a4fb99474d4dbb76f30a152b6: Status 404 returned error can't find the container with id c7d11a33a01ff52b5065b7262cd101b8929e668a4fb99474d4dbb76f30a152b6 Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.351167 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.379625 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerStarted","Data":"685d3abefe9b67fe30a8d3144b4fa757904bb44be394192eeed536a28e33d894"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.393228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerStarted","Data":"cdc0e255c1dddc4d207fda1f9985a2821c8145c1d47a84ebdda4f60f19e032a2"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.395382 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"] Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.402171 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.425519 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" event={"ID":"e32ebfca-afd2-4b49-a014-6246e2de8837","Type":"ContainerDied","Data":"3dc39624ebeefc6e025eedaa8513744693681ace04eebf7b23f4b4515778ea2e"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.425592 4688 scope.go:117] "RemoveContainer" containerID="d6d0387f0955a279f866c322009214147e4b3b6f4982865b5f9ac5fdd73411f1" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.425796 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-7jj5k" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.439539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerStarted","Data":"c7d11a33a01ff52b5065b7262cd101b8929e668a4fb99474d4dbb76f30a152b6"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.441039 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b698d98c-7kjns"] Jan 23 18:28:10 crc kubenswrapper[4688]: E0123 18:28:10.448375 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="dnsmasq-dns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.448408 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="dnsmasq-dns" Jan 23 18:28:10 crc kubenswrapper[4688]: E0123 18:28:10.448429 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="init" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.448437 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="init" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.448772 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" containerName="dnsmasq-dns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.450898 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.452814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df8898f5b-rfw5n" event={"ID":"169bb621-8517-44d2-9193-1b75492e148f","Type":"ContainerStarted","Data":"9e0302038fbee4b080d943b6720f0e6ed5c4f585cf43b6d93cf5f393b1e13908"} Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.453510 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.457720 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.492857 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.492910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.493045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.493075 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.493159 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.493221 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc\") pod \"e32ebfca-afd2-4b49-a014-6246e2de8837\" (UID: \"e32ebfca-afd2-4b49-a014-6246e2de8837\") " Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.596859 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-httpd-config\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597175 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-ovndb-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-internal-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597468 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-combined-ca-bundle\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597598 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-config\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597630 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-public-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.597686 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5xsl\" (UniqueName: 
\"kubernetes.io/projected/158df6c9-791b-411c-9405-74bf8eaa2995-kube-api-access-m5xsl\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.621698 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc" (OuterVolumeSpecName: "kube-api-access-rvnkc") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "kube-api-access-rvnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.629647 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b698d98c-7kjns"] Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.669582 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.704824 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-config\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.704875 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-public-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.704917 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5xsl\" (UniqueName: \"kubernetes.io/projected/158df6c9-791b-411c-9405-74bf8eaa2995-kube-api-access-m5xsl\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.704972 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-httpd-config\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.705000 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-ovndb-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.705131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-internal-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " 
pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.705179 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-combined-ca-bundle\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.705861 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.706422 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.717475 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnkc\" (UniqueName: \"kubernetes.io/projected/e32ebfca-afd2-4b49-a014-6246e2de8837-kube-api-access-rvnkc\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.751570 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-combined-ca-bundle\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.759378 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-config\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.773056 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-internal-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.780430 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-public-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.800009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5xsl\" (UniqueName: \"kubernetes.io/projected/158df6c9-791b-411c-9405-74bf8eaa2995-kube-api-access-m5xsl\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.802517 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-httpd-config\") pod \"neutron-5b698d98c-7kjns\" (UID: 
\"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.815254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/158df6c9-791b-411c-9405-74bf8eaa2995-ovndb-tls-certs\") pod \"neutron-5b698d98c-7kjns\" (UID: \"158df6c9-791b-411c-9405-74bf8eaa2995\") " pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.826709 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.841748 4688 scope.go:117] "RemoveContainer" containerID="9debd0628af5646738279c14778affab05b8ccf4b800cd9a0d9eb670ff5dee4f" Jan 23 18:28:10 crc kubenswrapper[4688]: I0123 18:28:10.968896 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config" (OuterVolumeSpecName: "config") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.021138 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.032093 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.346283 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.430684 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e32ebfca-afd2-4b49-a014-6246e2de8837" (UID: "e32ebfca-afd2-4b49-a014-6246e2de8837"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.447112 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.447397 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e32ebfca-afd2-4b49-a014-6246e2de8837-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.526585 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmgh7" event={"ID":"fc227102-c953-4a8b-bfc2-918b63e457c1","Type":"ContainerStarted","Data":"bc7a92edcac4ed02f5d24ee15d4472bf7251e23f1eff22778b985381b2f8da96"} Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.593511 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" event={"ID":"98927a20-b6a0-4442-8168-dfafa76fce14","Type":"ContainerStarted","Data":"10fc11df646d6fad8f75c6a20ef0caf9966f04f394595603717d4232ad4b8ff5"} Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.685848 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xmgh7" podStartSLOduration=4.435704119 podStartE2EDuration="53.685819411s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="2026-01-23 18:27:20.34293994 +0000 UTC m=+1235.338764381" lastFinishedPulling="2026-01-23 18:28:09.593055232 +0000 UTC m=+1284.588879673" observedRunningTime="2026-01-23 18:28:11.558125572 +0000 UTC m=+1286.553950033" watchObservedRunningTime="2026-01-23 18:28:11.685819411 +0000 UTC m=+1286.681643852" Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.806842 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:28:11 crc kubenswrapper[4688]: I0123 18:28:11.836176 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-7jj5k"] Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.248373 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b698d98c-7kjns"] Jan 23 18:28:12 crc kubenswrapper[4688]: W0123 18:28:12.277832 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod158df6c9_791b_411c_9405_74bf8eaa2995.slice/crio-dbec062ebf7320a4c4cdd2637178ce15cddd2d33f63d49b73b8986a9fe5f5b96 WatchSource:0}: Error finding container dbec062ebf7320a4c4cdd2637178ce15cddd2d33f63d49b73b8986a9fe5f5b96: Status 404 returned error can't find the container with id dbec062ebf7320a4c4cdd2637178ce15cddd2d33f63d49b73b8986a9fe5f5b96 Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.404626 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.449588 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.868994 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerStarted","Data":"42281437f42a8d5076828069066a7ffe9c922cf065ae269f8b8dc978c1065d51"} Jan 23 18:28:12 crc kubenswrapper[4688]: 
I0123 18:28:12.890909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b698d98c-7kjns" event={"ID":"158df6c9-791b-411c-9405-74bf8eaa2995","Type":"ContainerStarted","Data":"dbec062ebf7320a4c4cdd2637178ce15cddd2d33f63d49b73b8986a9fe5f5b96"} Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.932697 4688 generic.go:334] "Generic (PLEG): container finished" podID="98927a20-b6a0-4442-8168-dfafa76fce14" containerID="f7f2c9ad5ea27d4f0ea9b96de111d6a883d1334fc7962bc013dae52841cfd8a8" exitCode=0 Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.932773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" event={"ID":"98927a20-b6a0-4442-8168-dfafa76fce14","Type":"ContainerDied","Data":"f7f2c9ad5ea27d4f0ea9b96de111d6a883d1334fc7962bc013dae52841cfd8a8"} Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.941585 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6df8898f5b-rfw5n" event={"ID":"169bb621-8517-44d2-9193-1b75492e148f","Type":"ContainerStarted","Data":"a13e03ef295f748989652ad9fadcf086944bb56bf3298c7ca4e0a5c97912bfe1"} Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.942840 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.942880 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.953678 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerStarted","Data":"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e"} Jan 23 18:28:12 crc kubenswrapper[4688]: I0123 18:28:12.993616 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6df8898f5b-rfw5n" podStartSLOduration=7.993600035 podStartE2EDuration="7.993600035s" podCreationTimestamp="2026-01-23 18:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:12.989408244 +0000 UTC m=+1287.985232685" watchObservedRunningTime="2026-01-23 18:28:12.993600035 +0000 UTC m=+1287.989424476" Jan 23 18:28:13 crc kubenswrapper[4688]: I0123 18:28:13.373473 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e32ebfca-afd2-4b49-a014-6246e2de8837" path="/var/lib/kubelet/pods/e32ebfca-afd2-4b49-a014-6246e2de8837/volumes" Jan 23 18:28:13 crc kubenswrapper[4688]: I0123 18:28:13.992554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vsp8t" event={"ID":"b8d25eb5-0041-42b6-8b61-ad9e728c3049","Type":"ContainerStarted","Data":"981ba849cc6952d6d50d67b9dd1872de9bbbc764ac40c171480863f34d78f347"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.007496 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" event={"ID":"98927a20-b6a0-4442-8168-dfafa76fce14","Type":"ContainerStarted","Data":"0aa312577198ac4cb613f9a993b263d15bc23e3c11e71197e5345ece124f03e1"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.007588 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.013762 4688 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-db-sync-vsp8t" podStartSLOduration=6.354715926 podStartE2EDuration="56.013730872s" podCreationTimestamp="2026-01-23 18:27:18 +0000 UTC" firstStartedPulling="2026-01-23 18:27:20.116143723 +0000 UTC m=+1235.111968164" lastFinishedPulling="2026-01-23 18:28:09.775158669 +0000 UTC m=+1284.770983110" observedRunningTime="2026-01-23 18:28:14.011421896 +0000 UTC m=+1289.007246337" watchObservedRunningTime="2026-01-23 18:28:14.013730872 +0000 UTC m=+1289.009555313" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.028447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerStarted","Data":"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.039033 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerStarted","Data":"34b71fd80089ea0d8c7559b2e1f370c029654e884dd53ee14302b5de033a4ba8"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.047424 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" podStartSLOduration=8.047398022 podStartE2EDuration="8.047398022s" podCreationTimestamp="2026-01-23 18:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:14.033736179 +0000 UTC m=+1289.029560620" watchObservedRunningTime="2026-01-23 18:28:14.047398022 +0000 UTC m=+1289.043222473" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.063662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerStarted","Data":"e96ae46b2dd0b02798e0cca30a38caf8646f652cde0f2db376c5531fee3545a4"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.063843 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.063824166 podStartE2EDuration="7.063824166s" podCreationTimestamp="2026-01-23 18:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:14.0556461 +0000 UTC m=+1289.051470541" watchObservedRunningTime="2026-01-23 18:28:14.063824166 +0000 UTC m=+1289.059648597" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.064865 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.085230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b698d98c-7kjns" event={"ID":"158df6c9-791b-411c-9405-74bf8eaa2995","Type":"ContainerStarted","Data":"7de69633baefb6f4c30a8abc235f328ce0abb532be0bf7ce9dc5c456e39b3413"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.085283 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b698d98c-7kjns" event={"ID":"158df6c9-791b-411c-9405-74bf8eaa2995","Type":"ContainerStarted","Data":"9b6ca087958aa89dff15d0832a7ad7f3996bd5b016d922d5a082c340a3390a59"} Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.086296 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b698d98c-7kjns" Jan 23 
18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.093854 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.09382917 podStartE2EDuration="9.09382917s" podCreationTimestamp="2026-01-23 18:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:14.090337939 +0000 UTC m=+1289.086162390" watchObservedRunningTime="2026-01-23 18:28:14.09382917 +0000 UTC m=+1289.089653621" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.166120 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b698d98c-7kjns" podStartSLOduration=4.166095472 podStartE2EDuration="4.166095472s" podCreationTimestamp="2026-01-23 18:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:14.157264487 +0000 UTC m=+1289.153088928" watchObservedRunningTime="2026-01-23 18:28:14.166095472 +0000 UTC m=+1289.161919923" Jan 23 18:28:14 crc kubenswrapper[4688]: I0123 18:28:14.185696 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74745fc86b-bp676" podStartSLOduration=8.185670386 podStartE2EDuration="8.185670386s" podCreationTimestamp="2026-01-23 18:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:14.119677935 +0000 UTC m=+1289.115502376" watchObservedRunningTime="2026-01-23 18:28:14.185670386 +0000 UTC m=+1289.181494827" Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.787309 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.789254 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.789273 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.789504 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" containerID="cri-o://4271017062f5efe1ad440674dd96ee3294ac5698541fecfc2b277c745aabfb91" gracePeriod=30 Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.790034 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" containerID="cri-o://82a9398a03cb91d3d33d9f3e3f9c37bc2915a944bbde249fb5d6d83eb649f6c6" gracePeriod=30 Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.882462 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:15 crc kubenswrapper[4688]: I0123 18:28:15.901438 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.116152 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc227102-c953-4a8b-bfc2-918b63e457c1" containerID="bc7a92edcac4ed02f5d24ee15d4472bf7251e23f1eff22778b985381b2f8da96" exitCode=0 Jan 23 18:28:16 crc 
kubenswrapper[4688]: I0123 18:28:16.116245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmgh7" event={"ID":"fc227102-c953-4a8b-bfc2-918b63e457c1","Type":"ContainerDied","Data":"bc7a92edcac4ed02f5d24ee15d4472bf7251e23f1eff22778b985381b2f8da96"} Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.131486 4688 generic.go:334] "Generic (PLEG): container finished" podID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerID="4271017062f5efe1ad440674dd96ee3294ac5698541fecfc2b277c745aabfb91" exitCode=143 Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.133042 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerDied","Data":"4271017062f5efe1ad440674dd96ee3294ac5698541fecfc2b277c745aabfb91"} Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.134083 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.134108 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:28:16 crc kubenswrapper[4688]: I0123 18:28:16.774835 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:17 crc kubenswrapper[4688]: I0123 18:28:17.879361 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:28:17 crc kubenswrapper[4688]: I0123 18:28:17.879699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:28:17 crc kubenswrapper[4688]: I0123 18:28:17.915707 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:28:17 crc kubenswrapper[4688]: I0123 18:28:17.939841 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:28:18 crc kubenswrapper[4688]: I0123 18:28:18.156372 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:28:18 crc kubenswrapper[4688]: I0123 18:28:18.156741 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:28:18 crc kubenswrapper[4688]: I0123 18:28:18.258791 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:28:18 crc kubenswrapper[4688]: I0123 18:28:18.486225 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6df8898f5b-rfw5n" Jan 23 18:28:18 crc kubenswrapper[4688]: I0123 18:28:18.496509 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 23 18:28:19 crc kubenswrapper[4688]: I0123 18:28:19.212239 4688 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": read tcp 10.217.0.2:60254->10.217.0.158:9322: read: connection reset by peer" Jan 23 18:28:19 crc kubenswrapper[4688]: I0123 18:28:19.212248 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": read tcp 10.217.0.2:60252->10.217.0.158:9322: read: connection reset by peer" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.199246 4688 generic.go:334] "Generic (PLEG): container finished" podID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerID="82a9398a03cb91d3d33d9f3e3f9c37bc2915a944bbde249fb5d6d83eb649f6c6" exitCode=0 Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.199575 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerDied","Data":"82a9398a03cb91d3d33d9f3e3f9c37bc2915a944bbde249fb5d6d83eb649f6c6"} Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.225453 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xmgh7" event={"ID":"fc227102-c953-4a8b-bfc2-918b63e457c1","Type":"ContainerDied","Data":"1b84cde335a72c60effc29aabcf36d0b317bd63a34b498c615cb9403cec1f65c"} Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.225517 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b84cde335a72c60effc29aabcf36d0b317bd63a34b498c615cb9403cec1f65c" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.240711 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.275768 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data\") pod \"fc227102-c953-4a8b-bfc2-918b63e457c1\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.275903 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2bv\" (UniqueName: \"kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv\") pod \"fc227102-c953-4a8b-bfc2-918b63e457c1\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.275933 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle\") pod \"fc227102-c953-4a8b-bfc2-918b63e457c1\" (UID: \"fc227102-c953-4a8b-bfc2-918b63e457c1\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.284671 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fc227102-c953-4a8b-bfc2-918b63e457c1" (UID: "fc227102-c953-4a8b-bfc2-918b63e457c1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.291400 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv" (OuterVolumeSpecName: "kube-api-access-vb2bv") pod "fc227102-c953-4a8b-bfc2-918b63e457c1" (UID: "fc227102-c953-4a8b-bfc2-918b63e457c1"). InnerVolumeSpecName "kube-api-access-vb2bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.356365 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc227102-c953-4a8b-bfc2-918b63e457c1" (UID: "fc227102-c953-4a8b-bfc2-918b63e457c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.378632 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.378678 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb2bv\" (UniqueName: \"kubernetes.io/projected/fc227102-c953-4a8b-bfc2-918b63e457c1-kube-api-access-vb2bv\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.378693 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc227102-c953-4a8b-bfc2-918b63e457c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.427000 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.479350 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle\") pod \"f3fccf89-b010-4ac7-8566-83b3704ef12e\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.479710 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f69lx\" (UniqueName: \"kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx\") pod \"f3fccf89-b010-4ac7-8566-83b3704ef12e\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.480234 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data\") pod \"f3fccf89-b010-4ac7-8566-83b3704ef12e\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.480286 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca\") pod \"f3fccf89-b010-4ac7-8566-83b3704ef12e\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.480302 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs\") pod \"f3fccf89-b010-4ac7-8566-83b3704ef12e\" (UID: \"f3fccf89-b010-4ac7-8566-83b3704ef12e\") " Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.481649 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs" (OuterVolumeSpecName: "logs") pod "f3fccf89-b010-4ac7-8566-83b3704ef12e" (UID: "f3fccf89-b010-4ac7-8566-83b3704ef12e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.492357 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx" (OuterVolumeSpecName: "kube-api-access-f69lx") pod "f3fccf89-b010-4ac7-8566-83b3704ef12e" (UID: "f3fccf89-b010-4ac7-8566-83b3704ef12e"). InnerVolumeSpecName "kube-api-access-f69lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.523923 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3fccf89-b010-4ac7-8566-83b3704ef12e" (UID: "f3fccf89-b010-4ac7-8566-83b3704ef12e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.559314 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data" (OuterVolumeSpecName: "config-data") pod "f3fccf89-b010-4ac7-8566-83b3704ef12e" (UID: "f3fccf89-b010-4ac7-8566-83b3704ef12e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.580335 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f3fccf89-b010-4ac7-8566-83b3704ef12e" (UID: "f3fccf89-b010-4ac7-8566-83b3704ef12e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.583774 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.583818 4688 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.583858 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3fccf89-b010-4ac7-8566-83b3704ef12e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.583870 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3fccf89-b010-4ac7-8566-83b3704ef12e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:20 crc kubenswrapper[4688]: I0123 18:28:20.583882 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69lx\" (UniqueName: \"kubernetes.io/projected/f3fccf89-b010-4ac7-8566-83b3704ef12e-kube-api-access-f69lx\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.253632 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerStarted","Data":"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"} Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.257053 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xmgh7" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.260927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"f3fccf89-b010-4ac7-8566-83b3704ef12e","Type":"ContainerDied","Data":"b4c24ea39c74bba63efe6d737ad095d639fe161dd90c941f3a18e8223e11cb70"} Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.261017 4688 scope.go:117] "RemoveContainer" containerID="82a9398a03cb91d3d33d9f3e3f9c37bc2915a944bbde249fb5d6d83eb649f6c6" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.261398 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.331449 4688 scope.go:117] "RemoveContainer" containerID="4271017062f5efe1ad440674dd96ee3294ac5698541fecfc2b277c745aabfb91" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.400304 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.400346 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.418418 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:28:21 crc kubenswrapper[4688]: E0123 18:28:21.418910 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" containerName="barbican-db-sync" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.418926 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" containerName="barbican-db-sync" Jan 23 18:28:21 crc kubenswrapper[4688]: E0123 18:28:21.418936 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.418943 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" Jan 23 18:28:21 crc kubenswrapper[4688]: E0123 18:28:21.418962 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.418968 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.419144 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api-log" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.419161 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" containerName="barbican-db-sync" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.419175 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" containerName="watcher-api" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.420369 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.430080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.430371 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.430528 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.461131 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542232 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-config-data\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542292 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542325 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbhct\" (UniqueName: \"kubernetes.io/projected/ded0f19f-c836-47bf-83f9-88634d30f76d-kube-api-access-kbhct\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542480 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded0f19f-c836-47bf-83f9-88634d30f76d-logs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.542559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.556473 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-775f789f8-94pvr"] Jan 23 18:28:21 
crc kubenswrapper[4688]: I0123 18:28:21.558472 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.568954 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-htzlw" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.569199 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.575112 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.575370 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775f789f8-94pvr"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.602515 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57fb8477df-2m7ng"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.604808 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.614266 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57fb8477df-2m7ng"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.633739 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644524 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644655 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644694 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-config-data\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbhct\" (UniqueName: \"kubernetes.io/projected/ded0f19f-c836-47bf-83f9-88634d30f76d-kube-api-access-kbhct\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644848 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ded0f19f-c836-47bf-83f9-88634d30f76d-logs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.644885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.654392 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ded0f19f-c836-47bf-83f9-88634d30f76d-logs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.655473 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-config-data\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.655817 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.663208 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.663955 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.690123 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ded0f19f-c836-47bf-83f9-88634d30f76d-public-tls-certs\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.701041 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbhct\" (UniqueName: \"kubernetes.io/projected/ded0f19f-c836-47bf-83f9-88634d30f76d-kube-api-access-kbhct\") pod \"watcher-api-0\" (UID: \"ded0f19f-c836-47bf-83f9-88634d30f76d\") " pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764303 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data-custom\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764377 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data-custom\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764418 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-combined-ca-bundle\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764474 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69811c17-16d3-41e2-b891-6acdfeb480b0-logs\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764723 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-combined-ca-bundle\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.764944 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w24v\" (UniqueName: \"kubernetes.io/projected/c28c58c6-022f-44fc-878a-92a0ad162488-kube-api-access-7w24v\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.765100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fplhn\" (UniqueName: \"kubernetes.io/projected/69811c17-16d3-41e2-b891-6acdfeb480b0-kube-api-access-fplhn\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.765300 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.765398 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c58c6-022f-44fc-878a-92a0ad162488-logs\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.765464 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data\") 
pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.777976 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.779518 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="dnsmasq-dns" containerID="cri-o://0aa312577198ac4cb613f9a993b263d15bc23e3c11e71197e5345ece124f03e1" gracePeriod=10 Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.783308 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.809875 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.814036 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.816581 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.853702 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.867866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.867942 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fplhn\" (UniqueName: \"kubernetes.io/projected/69811c17-16d3-41e2-b891-6acdfeb480b0-kube-api-access-fplhn\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868049 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868100 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c58c6-022f-44fc-878a-92a0ad162488-logs\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 
crc kubenswrapper[4688]: I0123 18:28:21.868216 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868304 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data-custom\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868331 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data-custom\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.868361 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-combined-ca-bundle\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869075 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6dn\" (UniqueName: \"kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869115 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69811c17-16d3-41e2-b891-6acdfeb480b0-logs\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869218 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc\") pod 
\"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869326 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-combined-ca-bundle\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.869415 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w24v\" (UniqueName: \"kubernetes.io/projected/c28c58c6-022f-44fc-878a-92a0ad162488-kube-api-access-7w24v\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.870363 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c28c58c6-022f-44fc-878a-92a0ad162488-logs\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.871115 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69811c17-16d3-41e2-b891-6acdfeb480b0-logs\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.890862 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.893059 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.903084 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.908321 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fplhn\" (UniqueName: \"kubernetes.io/projected/69811c17-16d3-41e2-b891-6acdfeb480b0-kube-api-access-fplhn\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.915058 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w24v\" (UniqueName: \"kubernetes.io/projected/c28c58c6-022f-44fc-878a-92a0ad162488-kube-api-access-7w24v\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.924217 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data-custom\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.928082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data-custom\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.931291 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-config-data\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.932892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-combined-ca-bundle\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.935770 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69811c17-16d3-41e2-b891-6acdfeb480b0-config-data\") pod \"barbican-keystone-listener-775f789f8-94pvr\" (UID: \"69811c17-16d3-41e2-b891-6acdfeb480b0\") " pod="openstack/barbican-keystone-listener-775f789f8-94pvr" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.942052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c28c58c6-022f-44fc-878a-92a0ad162488-combined-ca-bundle\") pod \"barbican-worker-57fb8477df-2m7ng\" (UID: \"c28c58c6-022f-44fc-878a-92a0ad162488\") " pod="openstack/barbican-worker-57fb8477df-2m7ng" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.963840 4688 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"] Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971463 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971572 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs6dn\" (UniqueName: \"kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971654 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971792 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2k4w\" (UniqueName: \"kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971842 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.971970 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs\") pod \"barbican-api-6d568b8954-n7nkz\" 
(UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.972003 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.972029 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.973084 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.973861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.974984 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.975688 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.976431 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:21 crc kubenswrapper[4688]: I0123 18:28:21.998662 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs6dn\" (UniqueName: \"kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn\") pod \"dnsmasq-dns-85ff748b95-kwrg8\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.074562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2k4w\" (UniqueName: \"kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " 
pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.074681 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.074830 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.074868 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.075253 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.078651 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.090642 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.091687 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.108519 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57fb8477df-2m7ng"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.109218 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.125655 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2k4w\" (UniqueName: \"kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w\") pod \"barbican-api-6d568b8954-n7nkz\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") " pod="openstack/barbican-api-6d568b8954-n7nkz"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.158485 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.163780 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.166037 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.200354 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-775f789f8-94pvr"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.283074 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.304202 4688 generic.go:334] "Generic (PLEG): container finished" podID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" containerID="981ba849cc6952d6d50d67b9dd1872de9bbbc764ac40c171480863f34d78f347" exitCode=0
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.304307 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vsp8t" event={"ID":"b8d25eb5-0041-42b6-8b61-ad9e728c3049","Type":"ContainerDied","Data":"981ba849cc6952d6d50d67b9dd1872de9bbbc764ac40c171480863f34d78f347"}
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.365254 4688 generic.go:334] "Generic (PLEG): container finished" podID="98927a20-b6a0-4442-8168-dfafa76fce14" containerID="0aa312577198ac4cb613f9a993b263d15bc23e3c11e71197e5345ece124f03e1" exitCode=0
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.365572 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" event={"ID":"98927a20-b6a0-4442-8168-dfafa76fce14","Type":"ContainerDied","Data":"0aa312577198ac4cb613f9a993b263d15bc23e3c11e71197e5345ece124f03e1"}
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.381697 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d568b8954-n7nkz"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.397108 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.397192 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.670769 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht"
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.798478 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.798544 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.798820 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np2cb\" (UniqueName: \"kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.798881 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.799074 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.799769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb\") pod \"98927a20-b6a0-4442-8168-dfafa76fce14\" (UID: \"98927a20-b6a0-4442-8168-dfafa76fce14\") "
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.813115 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb" (OuterVolumeSpecName: "kube-api-access-np2cb") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "kube-api-access-np2cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.870439 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Jan 23 18:28:22 crc kubenswrapper[4688]: I0123 18:28:22.904076 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np2cb\" (UniqueName: \"kubernetes.io/projected/98927a20-b6a0-4442-8168-dfafa76fce14-kube-api-access-np2cb\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.002980 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.007029 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.034296 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.047510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.047720 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.049258 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config" (OuterVolumeSpecName: "config") pod "98927a20-b6a0-4442-8168-dfafa76fce14" (UID: "98927a20-b6a0-4442-8168-dfafa76fce14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.110107 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.110156 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.110860 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.110873 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98927a20-b6a0-4442-8168-dfafa76fce14-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.147807 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57fb8477df-2m7ng"]
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.388649 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fccf89-b010-4ac7-8566-83b3704ef12e" path="/var/lib/kubelet/pods/f3fccf89-b010-4ac7-8566-83b3704ef12e/volumes"
Jan 23 18:28:23 crc kubenswrapper[4688]: W0123 18:28:23.393643 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74cd1c9b_f11b_40c9_a5b5_edc28c7c3c4d.slice/crio-a3c6d30ff5b39a73cbc2708efbc7c3dca2f9f5811a1b9f1b904043bc001ecede WatchSource:0}: Error finding container a3c6d30ff5b39a73cbc2708efbc7c3dca2f9f5811a1b9f1b904043bc001ecede: Status 404 returned error can't find the container with id a3c6d30ff5b39a73cbc2708efbc7c3dca2f9f5811a1b9f1b904043bc001ecede
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.397835 4688 generic.go:334] "Generic (PLEG): container finished" podID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerID="e4e12533f97b009396b78264b0386f9f0c7ebea268eacf6a4cd992fafe1c0b95" exitCode=137
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.397865 4688 generic.go:334] "Generic (PLEG): container finished" podID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerID="67eddeec582d2097fd83ccf70d7b625bb5d777a4f0a668b075319de94028c377" exitCode=137
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.405091 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht"
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427867 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"]
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427909 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-775f789f8-94pvr"]
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427926 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerDied","Data":"e4e12533f97b009396b78264b0386f9f0c7ebea268eacf6a4cd992fafe1c0b95"}
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427956 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerDied","Data":"67eddeec582d2097fd83ccf70d7b625bb5d777a4f0a668b075319de94028c377"}
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427970 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ded0f19f-c836-47bf-83f9-88634d30f76d","Type":"ContainerStarted","Data":"ffdc841c56dd68dfe6212fa1b6e16fe6652fe1ca0c6eb1dd49a028914699fb31"}
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.427983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" event={"ID":"98927a20-b6a0-4442-8168-dfafa76fce14","Type":"ContainerDied","Data":"10fc11df646d6fad8f75c6a20ef0caf9966f04f394595603717d4232ad4b8ff5"}
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.428010 4688 scope.go:117] "RemoveContainer" containerID="0aa312577198ac4cb613f9a993b263d15bc23e3c11e71197e5345ece124f03e1"
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.428814 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57fb8477df-2m7ng" event={"ID":"c28c58c6-022f-44fc-878a-92a0ad162488","Type":"ContainerStarted","Data":"0114f3c8a4bc9dd3393d422896700e1cefe3cc9f0909cc822e51dbe4ab26f920"}
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.474391 4688 scope.go:117] "RemoveContainer" containerID="f7f2c9ad5ea27d4f0ea9b96de111d6a883d1334fc7962bc013dae52841cfd8a8"
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.482285 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"]
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.504475 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-bk5ht"]
Jan 23 18:28:23 crc kubenswrapper[4688]: I0123 18:28:23.716120 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"]
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.472554 4688 generic.go:334] "Generic (PLEG): container finished" podID="c4a402bb-fae6-4f62-b956-eca577195a79" containerID="62ce1f46b085d1a812e5e3acd914ad43e5d2e2086f7695ec92bbe00cb3ba9c5d" exitCode=137
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.473290 4688 generic.go:334] "Generic (PLEG): container finished" podID="c4a402bb-fae6-4f62-b956-eca577195a79" containerID="10452553e627ad3a98a6ca4d955f1f3c9b427d8afde7af56ad4e6603f763a129" exitCode=137
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.473107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerDied","Data":"62ce1f46b085d1a812e5e3acd914ad43e5d2e2086f7695ec92bbe00cb3ba9c5d"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.473489 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerDied","Data":"10452553e627ad3a98a6ca4d955f1f3c9b427d8afde7af56ad4e6603f763a129"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.481917 4688 generic.go:334] "Generic (PLEG): container finished" podID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerID="dfef96c67222e59404f1f91845c00af32707036ed6386a25b490347355f06b16" exitCode=0
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.481994 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" event={"ID":"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d","Type":"ContainerDied","Data":"dfef96c67222e59404f1f91845c00af32707036ed6386a25b490347355f06b16"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.482029 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" event={"ID":"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d","Type":"ContainerStarted","Data":"a3c6d30ff5b39a73cbc2708efbc7c3dca2f9f5811a1b9f1b904043bc001ecede"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.499252 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f789f8-94pvr" event={"ID":"69811c17-16d3-41e2-b891-6acdfeb480b0","Type":"ContainerStarted","Data":"a3576242eea1f33f3ab4fd9c7d7b20357872d30536fac3b76062c0cffbf0525d"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.508065 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerStarted","Data":"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.508115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerStarted","Data":"4a4481b2d4861f09b25225e0f3699291229de9ec594f5af0cf727b51917edeb4"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.510123 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68fdb6575c-9fggx" event={"ID":"51bf7ae1-482b-45a8-b540-8282f867b3c8","Type":"ContainerDied","Data":"7110f407018500af55c43e50ebae9257c20211a49e7daadfece13ffa03e78c5e"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.510157 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7110f407018500af55c43e50ebae9257c20211a49e7daadfece13ffa03e78c5e"
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.514239 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ded0f19f-c836-47bf-83f9-88634d30f76d","Type":"ContainerStarted","Data":"dcd169134955f48b48c857f427da0e3283acb49fa5244312e8b77637299d2335"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.526043 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vsp8t" event={"ID":"b8d25eb5-0041-42b6-8b61-ad9e728c3049","Type":"ContainerDied","Data":"70b9b926e2e9aa7e2e94519195cc000a324f771c949e87431d22a0e28e611e37"}
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.526089 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b9b926e2e9aa7e2e94519195cc000a324f771c949e87431d22a0e28e611e37"
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.615016 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fdb6575c-9fggx"
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.698170 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data\") pod \"51bf7ae1-482b-45a8-b540-8282f867b3c8\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.698271 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key\") pod \"51bf7ae1-482b-45a8-b540-8282f867b3c8\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.698400 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts\") pod \"51bf7ae1-482b-45a8-b540-8282f867b3c8\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.698422 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs\") pod \"51bf7ae1-482b-45a8-b540-8282f867b3c8\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.698452 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8p2t\" (UniqueName: \"kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t\") pod \"51bf7ae1-482b-45a8-b540-8282f867b3c8\" (UID: \"51bf7ae1-482b-45a8-b540-8282f867b3c8\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.705467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs" (OuterVolumeSpecName: "logs") pod "51bf7ae1-482b-45a8-b540-8282f867b3c8" (UID: "51bf7ae1-482b-45a8-b540-8282f867b3c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.708116 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "51bf7ae1-482b-45a8-b540-8282f867b3c8" (UID: "51bf7ae1-482b-45a8-b540-8282f867b3c8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.746039 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t" (OuterVolumeSpecName: "kube-api-access-j8p2t") pod "51bf7ae1-482b-45a8-b540-8282f867b3c8" (UID: "51bf7ae1-482b-45a8-b540-8282f867b3c8"). InnerVolumeSpecName "kube-api-access-j8p2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.770780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data" (OuterVolumeSpecName: "config-data") pod "51bf7ae1-482b-45a8-b540-8282f867b3c8" (UID: "51bf7ae1-482b-45a8-b540-8282f867b3c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.780685 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts" (OuterVolumeSpecName: "scripts") pod "51bf7ae1-482b-45a8-b540-8282f867b3c8" (UID: "51bf7ae1-482b-45a8-b540-8282f867b3c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.810892 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.810930 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51bf7ae1-482b-45a8-b540-8282f867b3c8-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.810942 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8p2t\" (UniqueName: \"kubernetes.io/projected/51bf7ae1-482b-45a8-b540-8282f867b3c8-kube-api-access-j8p2t\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.810954 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51bf7ae1-482b-45a8-b540-8282f867b3c8-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.810966 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51bf7ae1-482b-45a8-b540-8282f867b3c8-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.819251 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vsp8t"
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912417 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912546 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912694 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912829 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912922 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7h62\" (UniqueName: \"kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.912960 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle\") pod \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\" (UID: \"b8d25eb5-0041-42b6-8b61-ad9e728c3049\") "
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.914734 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.936747 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62" (OuterVolumeSpecName: "kube-api-access-k7h62") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "kube-api-access-k7h62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.944305 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:24 crc kubenswrapper[4688]: I0123 18:28:24.944461 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts" (OuterVolumeSpecName: "scripts") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.017667 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7h62\" (UniqueName: \"kubernetes.io/projected/b8d25eb5-0041-42b6-8b61-ad9e728c3049-kube-api-access-k7h62\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.017696 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8d25eb5-0041-42b6-8b61-ad9e728c3049-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.017705 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.017713 4688 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.063989 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.124968 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.174662 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data" (OuterVolumeSpecName: "config-data") pod "b8d25eb5-0041-42b6-8b61-ad9e728c3049" (UID: "b8d25eb5-0041-42b6-8b61-ad9e728c3049"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.234110 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8d25eb5-0041-42b6-8b61-ad9e728c3049-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.461669 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" path="/var/lib/kubelet/pods/98927a20-b6a0-4442-8168-dfafa76fce14/volumes"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.559411 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f84479849-glxjc" event={"ID":"c4a402bb-fae6-4f62-b956-eca577195a79","Type":"ContainerDied","Data":"df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55"}
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.559521 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df52a345b04c76e97e1bf24061d563555f3668fad365287cffb9a6eb68dabb55"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.583060 4688 generic.go:334] "Generic (PLEG): container finished" podID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerID="ec8b8bc91a588637f13d00296fe17148bc41ebc794d46b44eacef30eeb89bdfc" exitCode=137
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.583098 4688 generic.go:334] "Generic (PLEG): container finished" podID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerID="f50d1e80a06eec536832077b72b853b8f2f951ab308fcd61b2907dae5d9e0569" exitCode=137
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.583198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerDied","Data":"ec8b8bc91a588637f13d00296fe17148bc41ebc794d46b44eacef30eeb89bdfc"}
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.583235 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerDied","Data":"f50d1e80a06eec536832077b72b853b8f2f951ab308fcd61b2907dae5d9e0569"}
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.593260 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vsp8t"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.596022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ded0f19f-c836-47bf-83f9-88634d30f76d","Type":"ContainerStarted","Data":"666fe288c35ada048f2ef23268cf2c23dd30ee901bc28ec4329534c7457e43e3"}
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.596069 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.596133 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68fdb6575c-9fggx"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.603698 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ded0f19f-c836-47bf-83f9-88634d30f76d" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.171:9322/\": dial tcp 10.217.0.171:9322: connect: connection refused"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.622667 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f84479849-glxjc"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667044 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdqz\" (UniqueName: \"kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667126 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667179 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.667159698 podStartE2EDuration="4.667159698s" podCreationTimestamp="2026-01-23 18:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:25.64890196 +0000 UTC m=+1300.644726421" watchObservedRunningTime="2026-01-23 18:28:25.667159698 +0000 UTC m=+1300.662984149"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667256 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667336 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.667427 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.682013 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs" (OuterVolumeSpecName: "logs") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.690056 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-558dd665cf-xhjvb"
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.690971 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz" (OuterVolumeSpecName: "kube-api-access-xgdqz") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79"). InnerVolumeSpecName "kube-api-access-xgdqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.692401 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: E0123 18:28:25.739301 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts podName:c4a402bb-fae6-4f62-b956-eca577195a79 nodeName:}" failed. No retries permitted until 2026-01-23 18:28:26.239269405 +0000 UTC m=+1301.235093846 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "scripts" (UniqueName: "kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79") : error deleting /var/lib/kubelet/pods/c4a402bb-fae6-4f62-b956-eca577195a79/volume-subpaths: remove /var/lib/kubelet/pods/c4a402bb-fae6-4f62-b956-eca577195a79/volume-subpaths: no such file or directory
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.741567 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data" (OuterVolumeSpecName: "config-data") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772051 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld6mg\" (UniqueName: \"kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg\") pod \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772119 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data\") pod \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772348 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs\") pod \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772424 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts\") pod \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772457 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key\") pod \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\" (UID: \"abcf3d4c-7571-4b15-8b71-2ad279c56c87\") "
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.772754 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs" (OuterVolumeSpecName: "logs") pod "abcf3d4c-7571-4b15-8b71-2ad279c56c87" (UID: "abcf3d4c-7571-4b15-8b71-2ad279c56c87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.773159 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdqz\" (UniqueName: \"kubernetes.io/projected/c4a402bb-fae6-4f62-b956-eca577195a79-kube-api-access-xgdqz\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.773203 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.773213 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4a402bb-fae6-4f62-b956-eca577195a79-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.773221 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4a402bb-fae6-4f62-b956-eca577195a79-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.773229 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abcf3d4c-7571-4b15-8b71-2ad279c56c87-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.789344 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "abcf3d4c-7571-4b15-8b71-2ad279c56c87" (UID: "abcf3d4c-7571-4b15-8b71-2ad279c56c87"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.789400 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg" (OuterVolumeSpecName: "kube-api-access-ld6mg") pod "abcf3d4c-7571-4b15-8b71-2ad279c56c87" (UID: "abcf3d4c-7571-4b15-8b71-2ad279c56c87"). InnerVolumeSpecName "kube-api-access-ld6mg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.823233 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"]
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.824900 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts" (OuterVolumeSpecName: "scripts") pod "abcf3d4c-7571-4b15-8b71-2ad279c56c87" (UID: "abcf3d4c-7571-4b15-8b71-2ad279c56c87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.833918 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data" (OuterVolumeSpecName: "config-data") pod "abcf3d4c-7571-4b15-8b71-2ad279c56c87" (UID: "abcf3d4c-7571-4b15-8b71-2ad279c56c87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.858132 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-68fdb6575c-9fggx"]
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.888542 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.888574 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/abcf3d4c-7571-4b15-8b71-2ad279c56c87-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.888584 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld6mg\" (UniqueName: \"kubernetes.io/projected/abcf3d4c-7571-4b15-8b71-2ad279c56c87-kube-api-access-ld6mg\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:25 crc kubenswrapper[4688]: I0123 18:28:25.888595 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abcf3d4c-7571-4b15-8b71-2ad279c56c87-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.149895 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150441 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150453 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150467 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" containerName="cinder-db-sync"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150473 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" containerName="cinder-db-sync"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150486 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="dnsmasq-dns"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150497 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="dnsmasq-dns"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150516 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150522 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150534 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150539 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150565 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150571 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150582 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150596 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150610 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150616 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.150625 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="init"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150631 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="init"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150811 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150829 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150839 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="dnsmasq-dns"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150850 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150860 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150868 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150878 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" containerName="horizon-log"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.150885 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" containerName="cinder-db-sync"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.152045 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.158800 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.159261 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2f2qs"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.163772 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.164041 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.182561 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212426 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vw7m\" (UniqueName: \"kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212654 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212781 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.212837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.315311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") pod \"c4a402bb-fae6-4f62-b956-eca577195a79\" (UID: \"c4a402bb-fae6-4f62-b956-eca577195a79\") "
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.315734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.315771 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.315796 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.315912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vw7m\" (UniqueName: \"kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.316006 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.316032 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.316072 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.316158 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.320766 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts" (OuterVolumeSpecName: "scripts") pod "c4a402bb-fae6-4f62-b956-eca577195a79" (UID: "c4a402bb-fae6-4f62-b956-eca577195a79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.334098 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.336890 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.339818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.368281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.380004 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.395246 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.397774 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vw7m\" (UniqueName: \"kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m\") pod \"cinder-scheduler-0\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.412257 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.429393 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4a402bb-fae6-4f62-b956-eca577195a79-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.522893 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.533788 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.533863 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.533920 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tc6k\" (UniqueName: \"kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.533990 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.534088 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.534142 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.538130 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9696bf65d-hqqnw"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.542294 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: W0123 18:28:26.571933 4688 reflector.go:561] object-"openstack"/"cert-barbican-internal-svc": failed to list *v1.Secret: secrets "cert-barbican-internal-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.571996 4688 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-internal-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-barbican-internal-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 18:28:26 crc kubenswrapper[4688]: W0123 18:28:26.572096 4688 reflector.go:561] object-"openstack"/"cert-barbican-public-svc": failed to list *v1.Secret: secrets "cert-barbican-public-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object
Jan 23 18:28:26 crc kubenswrapper[4688]: E0123 18:28:26.572112 4688 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-barbican-public-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-barbican-public-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.633248 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9696bf65d-hqqnw"]
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.637692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.637772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data-custom\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.637906 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn6zl\" (UniqueName: \"kubernetes.io/projected/26d17642-a159-4c56-85da-4ce111096230-kube-api-access-vn6zl\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.637984 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638011 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tc6k\" (UniqueName: \"kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638079 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638139 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638175 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-combined-ca-bundle\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638239 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d17642-a159-4c56-85da-4ce111096230-logs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638260 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-public-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638302 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.638345 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.639580 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.641439 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.642551 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.643419 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.656462 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.672684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-558dd665cf-xhjvb" event={"ID":"abcf3d4c-7571-4b15-8b71-2ad279c56c87","Type":"ContainerDied","Data":"daeb51e104ca5c5fc510ead2f63ea721e51b847e8afe70cc4a21063348e4e0a6"}
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.672751 4688 scope.go:117] "RemoveContainer" containerID="ec8b8bc91a588637f13d00296fe17148bc41ebc794d46b44eacef30eeb89bdfc"
Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.673005 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-558dd665cf-xhjvb" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.729607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" event={"ID":"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d","Type":"ContainerStarted","Data":"7a19ad46276ccc97f70e42e622c0cb6556aa5b929cbbeecd31207f2d9e2b2e13"} Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.731148 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741042 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data-custom\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn6zl\" (UniqueName: \"kubernetes.io/projected/26d17642-a159-4c56-85da-4ce111096230-kube-api-access-vn6zl\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741383 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741446 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-combined-ca-bundle\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741516 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d17642-a159-4c56-85da-4ce111096230-logs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.741548 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-public-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.750760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data\") pod 
\"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.751087 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26d17642-a159-4c56-85da-4ce111096230-logs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.766039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-combined-ca-bundle\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.768334 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tc6k\" (UniqueName: \"kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k\") pod \"dnsmasq-dns-5c9776ccc5-cj4zt\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.775993 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-config-data-custom\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.782616 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerStarted","Data":"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"} Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.783056 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f84479849-glxjc" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.798610 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn6zl\" (UniqueName: \"kubernetes.io/projected/26d17642-a159-4c56-85da-4ce111096230-kube-api-access-vn6zl\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.805768 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" podStartSLOduration=5.805741611 podStartE2EDuration="5.805741611s" podCreationTimestamp="2026-01-23 18:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:26.771761398 +0000 UTC m=+1301.767585839" watchObservedRunningTime="2026-01-23 18:28:26.805741611 +0000 UTC m=+1301.801566052" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.812566 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 18:28:26 crc kubenswrapper[4688]: I0123 18:28:26.812902 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.049964 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.088585 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.114725 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.180445 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.273429 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-558dd665cf-xhjvb"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.288283 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.289837 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.289885 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.289915 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.289998 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.290029 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.290053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbkq\" (UniqueName: \"kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.290096 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.318958 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d568b8954-n7nkz" podStartSLOduration=6.31892357 podStartE2EDuration="6.31892357s" podCreationTimestamp="2026-01-23 18:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:26.924087655 +0000 UTC m=+1301.919912116" watchObservedRunningTime="2026-01-23 18:28:27.31892357 +0000 UTC m=+1302.314748011" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.338436 4688 scope.go:117] "RemoveContainer" containerID="f50d1e80a06eec536832077b72b853b8f2f951ab308fcd61b2907dae5d9e0569" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.338798 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-bk5ht" podUID="98927a20-b6a0-4442-8168-dfafa76fce14" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.392744 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.393399 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.394866 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbkq\" (UniqueName: \"kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.395041 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.395648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.403669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.403977 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") pod \"cinder-api-0\" (UID: 
\"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.394005 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.405000 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.407865 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.421067 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.424595 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bf7ae1-482b-45a8-b540-8282f867b3c8" path="/var/lib/kubelet/pods/51bf7ae1-482b-45a8-b540-8282f867b3c8/volumes" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.440285 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcf3d4c-7571-4b15-8b71-2ad279c56c87" path="/var/lib/kubelet/pods/abcf3d4c-7571-4b15-8b71-2ad279c56c87/volumes" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.448374 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.455681 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbkq\" (UniqueName: \"kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.457662 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.457788 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.458045 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.459434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") " pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: 
I0123 18:28:27.505704 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f84479849-glxjc"] Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.589975 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.665704 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.689889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-public-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.707529 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:28:27 crc kubenswrapper[4688]: E0123 18:28:27.750369 4688 secret.go:188] Couldn't get secret openstack/cert-barbican-internal-svc: failed to sync secret cache: timed out waiting for the condition Jan 23 18:28:27 crc kubenswrapper[4688]: E0123 18:28:27.750489 4688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs podName:26d17642-a159-4c56-85da-4ce111096230 nodeName:}" failed. No retries permitted until 2026-01-23 18:28:28.250460335 +0000 UTC m=+1303.246284776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs") pod "barbican-api-9696bf65d-hqqnw" (UID: "26d17642-a159-4c56-85da-4ce111096230") : failed to sync secret cache: timed out waiting for the condition Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.821554 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 18:28:27 crc kubenswrapper[4688]: I0123 18:28:27.836397 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="dnsmasq-dns" containerID="cri-o://7a19ad46276ccc97f70e42e622c0cb6556aa5b929cbbeecd31207f2d9e2b2e13" gracePeriod=10 Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.258985 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.259069 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.260035 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6"} pod="openstack/horizon-c854fbb9b-lr4lr" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.260091 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c854fbb9b-lr4lr" 
podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" containerID="cri-o://edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6" gracePeriod=30 Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.317604 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.327172 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26d17642-a159-4c56-85da-4ce111096230-internal-tls-certs\") pod \"barbican-api-9696bf65d-hqqnw\" (UID: \"26d17642-a159-4c56-85da-4ce111096230\") " pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.385502 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.496869 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.496971 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.498220 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"1a81bd590df2aec524c3ec13233f98ebdf699927b59ed3118001e95b865fe0d3"} pod="openstack/horizon-689f6b4f86-pbwfh" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.498267 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" containerID="cri-o://1a81bd590df2aec524c3ec13233f98ebdf699927b59ed3118001e95b865fe0d3" gracePeriod=30 Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.857694 4688 generic.go:334] "Generic (PLEG): container finished" podID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerID="7a19ad46276ccc97f70e42e622c0cb6556aa5b929cbbeecd31207f2d9e2b2e13" exitCode=0 Jan 23 18:28:28 crc kubenswrapper[4688]: I0123 18:28:28.858973 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" event={"ID":"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d","Type":"ContainerDied","Data":"7a19ad46276ccc97f70e42e622c0cb6556aa5b929cbbeecd31207f2d9e2b2e13"} Jan 23 18:28:29 crc kubenswrapper[4688]: I0123 18:28:29.373704 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a402bb-fae6-4f62-b956-eca577195a79" path="/var/lib/kubelet/pods/c4a402bb-fae6-4f62-b956-eca577195a79/volumes" Jan 23 18:28:30 crc kubenswrapper[4688]: I0123 18:28:30.667815 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"] Jan 23 18:28:30 crc kubenswrapper[4688]: I0123 18:28:30.848919 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-api-0"] Jan 23 18:28:30 crc kubenswrapper[4688]: I0123 18:28:30.898983 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerStarted","Data":"ad695e4466eba20855a6ebebea1d45a72aa0485a59e1c32da1c3f9cbb884ca1d"} Jan 23 18:28:31 crc kubenswrapper[4688]: I0123 18:28:31.784903 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ded0f19f-c836-47bf-83f9-88634d30f76d" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.171:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:31 crc kubenswrapper[4688]: I0123 18:28:31.811627 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 18:28:31 crc kubenswrapper[4688]: I0123 18:28:31.816510 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="ded0f19f-c836-47bf-83f9-88634d30f76d" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.171:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.008976 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.011447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerStarted","Data":"8918b521e3a27f4ac464299b296807604fdd8c69919420772d81f0004be3ea99"} Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.043687 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.043812 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.043846 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.043890 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs6dn\" (UniqueName: \"kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.044062 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 
18:28:32.044173 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb\") pod \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\" (UID: \"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d\") " Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.066645 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" event={"ID":"74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d","Type":"ContainerDied","Data":"a3c6d30ff5b39a73cbc2708efbc7c3dca2f9f5811a1b9f1b904043bc001ecede"} Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.066999 4688 scope.go:117] "RemoveContainer" containerID="7a19ad46276ccc97f70e42e622c0cb6556aa5b929cbbeecd31207f2d9e2b2e13" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.067237 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-kwrg8" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.082855 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn" (OuterVolumeSpecName: "kube-api-access-qs6dn") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "kube-api-access-qs6dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.150885 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs6dn\" (UniqueName: \"kubernetes.io/projected/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-kube-api-access-qs6dn\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.173030 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.227355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.254325 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.256984 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.257703 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config" (OuterVolumeSpecName: "config") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.378253 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.480009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.482501 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.584254 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" (UID: "74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.586783 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.727874 4688 scope.go:117] "RemoveContainer" containerID="dfef96c67222e59404f1f91845c00af32707036ed6386a25b490347355f06b16" Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.756415 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:28:32 crc kubenswrapper[4688]: I0123 18:28:32.820303 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="ded0f19f-c836-47bf-83f9-88634d30f76d" containerName="watcher-api-log" probeResult="failure" output="Get \"https://10.217.0.171:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:33 crc kubenswrapper[4688]: I0123 18:28:33.031135 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9696bf65d-hqqnw"] Jan 23 18:28:33 crc kubenswrapper[4688]: I0123 18:28:33.287982 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"] Jan 23 18:28:33 crc kubenswrapper[4688]: I0123 18:28:33.314020 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-kwrg8"] Jan 23 18:28:33 crc kubenswrapper[4688]: I0123 18:28:33.374322 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" path="/var/lib/kubelet/pods/74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d/volumes" Jan 23 18:28:33 crc kubenswrapper[4688]: I0123 18:28:33.956786 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.146042 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9696bf65d-hqqnw" 
event={"ID":"26d17642-a159-4c56-85da-4ce111096230","Type":"ContainerStarted","Data":"13931175d2058b52d3586e4d6ebab1841fa667e0c3d6ad3eeffcff83627bab72"} Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.146102 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9696bf65d-hqqnw" event={"ID":"26d17642-a159-4c56-85da-4ce111096230","Type":"ContainerStarted","Data":"e91a8f72cb455cec1d7cb4e7e23cde32a8d5bf48d1f444b03c140ad8b2a9d73d"} Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.153780 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerStarted","Data":"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c"} Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.166539 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerStarted","Data":"d4d3cdde31884b2d9cb7711f1dfc0bc0f5b06f29ca1d781fe9f4b8930881eb96"} Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.189529 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57fb8477df-2m7ng" event={"ID":"c28c58c6-022f-44fc-878a-92a0ad162488","Type":"ContainerStarted","Data":"8fc5ef339d5de2dbca85471e098c2168ecd5e328089e8bd9e432fb22c275ed4c"} Jan 23 18:28:34 crc kubenswrapper[4688]: I0123 18:28:34.238932 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f789f8-94pvr" event={"ID":"69811c17-16d3-41e2-b891-6acdfeb480b0","Type":"ContainerStarted","Data":"b3f149b3e0816f328ea528403e85cecf54ceb1af1ace9ed333651a82a2079c0c"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.292600 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerStarted","Data":"81d4a6e4607fea27af644d4e4ec40d0a062d7c83702b66497500bc57f011b18f"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.300154 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-775f789f8-94pvr" event={"ID":"69811c17-16d3-41e2-b891-6acdfeb480b0","Type":"ContainerStarted","Data":"f660958fdf2d2edcb25746915ba4bb7693c4730621f5e89249a43b860777a7d7"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.306233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9696bf65d-hqqnw" event={"ID":"26d17642-a159-4c56-85da-4ce111096230","Type":"ContainerStarted","Data":"446fda054a1e5226651e210a52fd3d38805ee42e2d6f23262b1de6c1a3fc6657"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.306490 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.313522 4688 generic.go:334] "Generic (PLEG): container finished" podID="25533537-7bbc-4377-8701-d21ec7b1f226" containerID="ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c" exitCode=0 Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.313973 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerDied","Data":"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.325057 4688 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-keystone-listener-775f789f8-94pvr" podStartSLOduration=5.784586037 podStartE2EDuration="14.325025824s" podCreationTimestamp="2026-01-23 18:28:21 +0000 UTC" firstStartedPulling="2026-01-23 18:28:23.434230009 +0000 UTC m=+1298.430054450" lastFinishedPulling="2026-01-23 18:28:31.974669796 +0000 UTC m=+1306.970494237" observedRunningTime="2026-01-23 18:28:35.32353083 +0000 UTC m=+1310.319355311" watchObservedRunningTime="2026-01-23 18:28:35.325025824 +0000 UTC m=+1310.320850265" Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.415526 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerStarted","Data":"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.474967 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57fb8477df-2m7ng" event={"ID":"c28c58c6-022f-44fc-878a-92a0ad162488","Type":"ContainerStarted","Data":"e8454b7ff88973dd07ce266b1e3651b5602e56cc7d237fa7a7b43771236dcaf0"} Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.489764 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9696bf65d-hqqnw" podStartSLOduration=9.489739758 podStartE2EDuration="9.489739758s" podCreationTimestamp="2026-01-23 18:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:35.421605087 +0000 UTC m=+1310.417429528" watchObservedRunningTime="2026-01-23 18:28:35.489739758 +0000 UTC m=+1310.485564199" Jan 23 18:28:35 crc kubenswrapper[4688]: I0123 18:28:35.566529 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57fb8477df-2m7ng" podStartSLOduration=5.175411751 podStartE2EDuration="14.566503389s" podCreationTimestamp="2026-01-23 18:28:21 +0000 UTC" firstStartedPulling="2026-01-23 18:28:23.188396577 +0000 UTC m=+1298.184221018" lastFinishedPulling="2026-01-23 18:28:32.579488215 +0000 UTC m=+1307.575312656" observedRunningTime="2026-01-23 18:28:35.560610879 +0000 UTC m=+1310.556435330" watchObservedRunningTime="2026-01-23 18:28:35.566503389 +0000 UTC m=+1310.562327830" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.433662 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.509679 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerStarted","Data":"28e40239fe23335b6d523c6121bb712cf79ee3a48f53a9ff08011a74cc1d2a6c"} Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.524155 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.526875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerStarted","Data":"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c"} Jan 23 18:28:36 crc 
kubenswrapper[4688]: I0123 18:28:36.533478 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.556399 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.595487502 podStartE2EDuration="10.55637211s" podCreationTimestamp="2026-01-23 18:28:26 +0000 UTC" firstStartedPulling="2026-01-23 18:28:29.949500891 +0000 UTC m=+1304.945325332" lastFinishedPulling="2026-01-23 18:28:32.910385499 +0000 UTC m=+1307.906209940" observedRunningTime="2026-01-23 18:28:36.54151805 +0000 UTC m=+1311.537342491" watchObservedRunningTime="2026-01-23 18:28:36.55637211 +0000 UTC m=+1311.552196551" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.574820 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerStarted","Data":"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"} Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.575060 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api-log" containerID="cri-o://ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b" gracePeriod=30 Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.575281 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.575880 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api" containerID="cri-o://24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79" gracePeriod=30 Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.576596 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.597707 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" podStartSLOduration=10.597681225 podStartE2EDuration="10.597681225s" podCreationTimestamp="2026-01-23 18:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:36.574765192 +0000 UTC m=+1311.570589653" watchObservedRunningTime="2026-01-23 18:28:36.597681225 +0000 UTC m=+1311.593505666" Jan 23 18:28:36 crc kubenswrapper[4688]: I0123 18:28:36.630564 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=10.630545116 podStartE2EDuration="10.630545116s" podCreationTimestamp="2026-01-23 18:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:36.615368227 +0000 UTC m=+1311.611192688" watchObservedRunningTime="2026-01-23 18:28:36.630545116 +0000 UTC m=+1311.626369557" Jan 23 18:28:37 crc kubenswrapper[4688]: I0123 18:28:37.427719 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:28:37 crc kubenswrapper[4688]: I0123 18:28:37.427769 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:37 crc kubenswrapper[4688]: I0123 18:28:37.595151 4688 generic.go:334] "Generic (PLEG): container finished" podID="74c198d0-1987-4562-9129-0df56fc666cb" containerID="ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b" exitCode=143 Jan 23 18:28:37 crc kubenswrapper[4688]: I0123 18:28:37.595233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerDied","Data":"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"} Jan 23 18:28:37 crc kubenswrapper[4688]: I0123 18:28:37.706521 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:38 crc kubenswrapper[4688]: I0123 18:28:38.160162 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-788dd47598-8wt2n" Jan 23 18:28:40 crc kubenswrapper[4688]: I0123 18:28:40.429579 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d568b8954-n7nkz" Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.046651 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b698d98c-7kjns" Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.158254 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74745fc86b-bp676"] Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.158599 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74745fc86b-bp676" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-api" containerID="cri-o://42281437f42a8d5076828069066a7ffe9c922cf065ae269f8b8dc978c1065d51" gracePeriod=30 Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.158798 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74745fc86b-bp676" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-httpd" containerID="cri-o://e96ae46b2dd0b02798e0cca30a38caf8646f652cde0f2db376c5531fee3545a4" gracePeriod=30 Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.698052 4688 generic.go:334] "Generic (PLEG): container finished" podID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerID="e96ae46b2dd0b02798e0cca30a38caf8646f652cde0f2db376c5531fee3545a4" exitCode=0 Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.698109 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerDied","Data":"e96ae46b2dd0b02798e0cca30a38caf8646f652cde0f2db376c5531fee3545a4"} Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.816363 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.831477 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.947381 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 18:28:41 crc kubenswrapper[4688]: I0123 18:28:41.967789 4688 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.008106 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.008517 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="dnsmasq-dns" containerID="cri-o://d972dcd52c935b6f7acfefba49d9be6e03ce424f41c4649ad2798ea305751bc1" gracePeriod=10 Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.069333 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: E0123 18:28:42.070657 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="init" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.070685 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="init" Jan 23 18:28:42 crc kubenswrapper[4688]: E0123 18:28:42.070707 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="dnsmasq-dns" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.070717 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="dnsmasq-dns" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.071007 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="74cd1c9b-f11b-40c9-a5b5-edc28c7c3c4d" containerName="dnsmasq-dns" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.075605 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.082777 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.083108 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jwjhs" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.083499 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.129648 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.150652 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc546\" (UniqueName: \"kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.150800 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.150843 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.150980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.170334 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.253625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc546\" (UniqueName: \"kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.253697 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.253723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " 
pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.253798 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.256918 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.284525 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.286791 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.286992 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc546\" (UniqueName: \"kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546\") pod \"openstackclient\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.406459 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.409313 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.441712 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.451773 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.454932 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.462464 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.561046 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.561144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.561293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.561408 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvhp\" (UniqueName: \"kubernetes.io/projected/5043fc78-cadf-4542-8673-2a02149409f9-kube-api-access-wvvhp\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.663728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.663913 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.664042 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvhp\" (UniqueName: \"kubernetes.io/projected/5043fc78-cadf-4542-8673-2a02149409f9-kube-api-access-wvvhp\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.664111 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.665737 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config\") pod \"openstackclient\" (UID: 
\"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.672302 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-openstack-config-secret\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.673100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5043fc78-cadf-4542-8673-2a02149409f9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.685492 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvhp\" (UniqueName: \"kubernetes.io/projected/5043fc78-cadf-4542-8673-2a02149409f9-kube-api-access-wvvhp\") pod \"openstackclient\" (UID: \"5043fc78-cadf-4542-8673-2a02149409f9\") " pod="openstack/openstackclient" Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.713021 4688 generic.go:334] "Generic (PLEG): container finished" podID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerID="d972dcd52c935b6f7acfefba49d9be6e03ce424f41c4649ad2798ea305751bc1" exitCode=0 Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.713317 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="cinder-scheduler" containerID="cri-o://81d4a6e4607fea27af644d4e4ec40d0a062d7c83702b66497500bc57f011b18f" gracePeriod=30 Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.713727 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" event={"ID":"1cf9be80-df2a-4135-9203-d078ad33acf3","Type":"ContainerDied","Data":"d972dcd52c935b6f7acfefba49d9be6e03ce424f41c4649ad2798ea305751bc1"} Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.715012 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="probe" containerID="cri-o://28e40239fe23335b6d523c6121bb712cf79ee3a48f53a9ff08011a74cc1d2a6c" gracePeriod=30 Jan 23 18:28:42 crc kubenswrapper[4688]: I0123 18:28:42.791640 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 18:28:43 crc kubenswrapper[4688]: I0123 18:28:43.740935 4688 generic.go:334] "Generic (PLEG): container finished" podID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerID="28e40239fe23335b6d523c6121bb712cf79ee3a48f53a9ff08011a74cc1d2a6c" exitCode=0 Jan 23 18:28:43 crc kubenswrapper[4688]: I0123 18:28:43.741010 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerDied","Data":"28e40239fe23335b6d523c6121bb712cf79ee3a48f53a9ff08011a74cc1d2a6c"} Jan 23 18:28:44 crc kubenswrapper[4688]: I0123 18:28:44.791279 4688 generic.go:334] "Generic (PLEG): container finished" podID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerID="81d4a6e4607fea27af644d4e4ec40d0a062d7c83702b66497500bc57f011b18f" exitCode=0 Jan 23 18:28:44 crc kubenswrapper[4688]: I0123 18:28:44.791330 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerDied","Data":"81d4a6e4607fea27af644d4e4ec40d0a062d7c83702b66497500bc57f011b18f"} Jan 23 18:28:44 crc kubenswrapper[4688]: I0123 18:28:44.826851 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.152:5353: connect: connection refused" Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.310961 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.628315 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9696bf65d-hqqnw" Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.719399 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"] Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.719687 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log" containerID="cri-o://5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15" gracePeriod=30 Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.720301 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api" containerID="cri-o://3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493" gracePeriod=30 Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.817932 4688 generic.go:334] "Generic (PLEG): container finished" podID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerID="42281437f42a8d5076828069066a7ffe9c922cf065ae269f8b8dc978c1065d51" exitCode=0 Jan 23 18:28:45 crc kubenswrapper[4688]: I0123 18:28:45.818433 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerDied","Data":"42281437f42a8d5076828069066a7ffe9c922cf065ae269f8b8dc978c1065d51"} Jan 23 18:28:46 crc kubenswrapper[4688]: I0123 18:28:46.284223 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 18:28:46 crc kubenswrapper[4688]: I0123 18:28:46.839585 4688 
generic.go:334] "Generic (PLEG): container finished" podID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerID="5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15" exitCode=143 Jan 23 18:28:46 crc kubenswrapper[4688]: I0123 18:28:46.839898 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerDied","Data":"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"} Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.335125 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.335748 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7q866,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cbbd26aa-7783-4958-95d0-a590f636947c): ErrImagePull: rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.336997 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.471706 4688 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 18:28:48 crc kubenswrapper[4688]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_264dbe04-0858-4997-b5be-deecf0c6f50e_0(7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290" Netns:"/var/run/netns/ab8c9480-b981-459d-adfa-1f328820aebb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290;K8S_POD_UID=264dbe04-0858-4997-b5be-deecf0c6f50e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/264dbe04-0858-4997-b5be-deecf0c6f50e]: expected pod UID "264dbe04-0858-4997-b5be-deecf0c6f50e" but got "5043fc78-cadf-4542-8673-2a02149409f9" from Kube API Jan 23 18:28:48 crc kubenswrapper[4688]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 18:28:48 crc kubenswrapper[4688]: > Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.472093 4688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 23 18:28:48 crc kubenswrapper[4688]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_264dbe04-0858-4997-b5be-deecf0c6f50e_0(7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290" Netns:"/var/run/netns/ab8c9480-b981-459d-adfa-1f328820aebb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7be47de8d456bde128122e80527b03ec5bfc164611cc0d859383ef5875703290;K8S_POD_UID=264dbe04-0858-4997-b5be-deecf0c6f50e" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/264dbe04-0858-4997-b5be-deecf0c6f50e]: expected pod UID "264dbe04-0858-4997-b5be-deecf0c6f50e" but got "5043fc78-cadf-4542-8673-2a02149409f9" from Kube 
API Jan 23 18:28:48 crc kubenswrapper[4688]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 18:28:48 crc kubenswrapper[4688]: > pod="openstack/openstackclient" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.599925 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.614074 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.626287 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5c564cf675-l776t"] Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.628999 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.629215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vw7m\" (UniqueName: \"kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.637129 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="cinder-scheduler" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.637447 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="cinder-scheduler" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.641297 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m" (OuterVolumeSpecName: "kube-api-access-5vw7m") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "kube-api-access-5vw7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.643278 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.629282 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.644418 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.644638 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.644715 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts\") pod \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\" (UID: \"6f3d1bf6-3f81-478a-9b1b-704b8083ba41\") " Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.645399 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="dnsmasq-dns" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.645417 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="dnsmasq-dns" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.645549 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="init" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.645558 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="init" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.645757 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="probe" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.645868 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="probe" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.645938 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.646814 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.646837 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.646855 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vw7m\" (UniqueName: \"kubernetes.io/projected/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-kube-api-access-5vw7m\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.648304 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="probe" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.648343 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" containerName="cinder-scheduler" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.648368 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" containerName="dnsmasq-dns" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.649721 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.652046 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-httpd" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.652072 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-httpd" Jan 23 18:28:48 crc kubenswrapper[4688]: E0123 18:28:48.652118 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-api" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.652125 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-api" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.652542 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-api" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.652570 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" containerName="neutron-httpd" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.691993 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts" (OuterVolumeSpecName: "scripts") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.704376 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.716602 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.716628 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.716813 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.739364 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c564cf675-l776t"] Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750223 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750295 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config\") pod \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750344 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750463 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqn9x\" (UniqueName: \"kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750513 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750596 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m5tz\" (UniqueName: \"kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz\") pod \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750619 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750694 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs\") pod \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\" (UID: 
\"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750736 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle\") pod \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750798 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config\") pod \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\" (UID: \"424368f6-fce1-4e7d-b400-9554ec6a4fd3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.750823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb\") pod \"1cf9be80-df2a-4135-9203-d078ad33acf3\" (UID: \"1cf9be80-df2a-4135-9203-d078ad33acf3\") " Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.751929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-combined-ca-bundle\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752043 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmh7x\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-kube-api-access-lmh7x\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752079 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-config-data\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752150 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-run-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752204 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-internal-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752248 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-etc-swift\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc 
kubenswrapper[4688]: I0123 18:28:48.752333 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-public-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752537 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-log-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.752730 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.780701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "424368f6-fce1-4e7d-b400-9554ec6a4fd3" (UID: "424368f6-fce1-4e7d-b400-9554ec6a4fd3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.788068 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz" (OuterVolumeSpecName: "kube-api-access-9m5tz") pod "424368f6-fce1-4e7d-b400-9554ec6a4fd3" (UID: "424368f6-fce1-4e7d-b400-9554ec6a4fd3"). InnerVolumeSpecName "kube-api-access-9m5tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.821917 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x" (OuterVolumeSpecName: "kube-api-access-vqn9x") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "kube-api-access-vqn9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.855849 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-log-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856206 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-combined-ca-bundle\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856372 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmh7x\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-kube-api-access-lmh7x\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856480 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-config-data\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-run-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856743 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-internal-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.856852 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-etc-swift\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.857007 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-public-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.857822 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.857928 4688 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-vqn9x\" (UniqueName: \"kubernetes.io/projected/1cf9be80-df2a-4135-9203-d078ad33acf3-kube-api-access-vqn9x\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.858009 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m5tz\" (UniqueName: \"kubernetes.io/projected/424368f6-fce1-4e7d-b400-9554ec6a4fd3-kube-api-access-9m5tz\") on node \"crc\" DevicePath \"\"" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.861404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-log-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.862061 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8985e53c-d4f0-4f9a-96be-a540d7279676-run-httpd\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.864797 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-internal-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.877484 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-etc-swift\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.885109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-public-tls-certs\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.891515 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.892548 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-h49m6" event={"ID":"1cf9be80-df2a-4135-9203-d078ad33acf3","Type":"ContainerDied","Data":"f223734ff0dd4d800dea567c65d4a106ab966c0cd0775f594de0cba2c37f543e"} Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.892589 4688 scope.go:117] "RemoveContainer" containerID="d972dcd52c935b6f7acfefba49d9be6e03ce424f41c4649ad2798ea305751bc1" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.896170 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-config-data\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.904553 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.905272 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8985e53c-d4f0-4f9a-96be-a540d7279676-combined-ca-bundle\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.911876 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74745fc86b-bp676" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.912408 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmh7x\" (UniqueName: \"kubernetes.io/projected/8985e53c-d4f0-4f9a-96be-a540d7279676-kube-api-access-lmh7x\") pod \"swift-proxy-5c564cf675-l776t\" (UID: \"8985e53c-d4f0-4f9a-96be-a540d7279676\") " pod="openstack/swift-proxy-5c564cf675-l776t" Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.912477 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74745fc86b-bp676" event={"ID":"424368f6-fce1-4e7d-b400-9554ec6a4fd3","Type":"ContainerDied","Data":"c7d11a33a01ff52b5065b7262cd101b8929e668a4fb99474d4dbb76f30a152b6"} Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.918436 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="ceilometer-notification-agent" containerID="cri-o://64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1" gracePeriod=30 Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.918807 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="sg-core" containerID="cri-o://8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae" gracePeriod=30 Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.918978 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.934558 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:47140->10.217.0.175:9311: read: connection reset by peer"
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.934570 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d568b8954-n7nkz" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:47128->10.217.0.175:9311: read: connection reset by peer"
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.919088 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.919047 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f3d1bf6-3f81-478a-9b1b-704b8083ba41","Type":"ContainerDied","Data":"ad695e4466eba20855a6ebebea1d45a72aa0485a59e1c32da1c3f9cbb884ca1d"}
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.944879 4688 scope.go:117] "RemoveContainer" containerID="a1c63aad83adb6ee4c94a456154d7990b526246afb1deaf8bcd78962b1eb292c"
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.965375 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:48 crc kubenswrapper[4688]: I0123 18:28:48.987415 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="264dbe04-0858-4997-b5be-deecf0c6f50e" podUID="5043fc78-cadf-4542-8673-2a02149409f9"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.007764 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.029446 4688 scope.go:117] "RemoveContainer" containerID="e96ae46b2dd0b02798e0cca30a38caf8646f652cde0f2db376c5531fee3545a4"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.030621 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.037584 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.043353 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="264dbe04-0858-4997-b5be-deecf0c6f50e" podUID="5043fc78-cadf-4542-8673-2a02149409f9"
Jan 23 18:28:49 crc kubenswrapper[4688]: E0123 18:28:49.058628 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f552eda_6ccb_41a6_a9ee_47dc4350d3da.slice/crio-conmon-3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f552eda_6ccb_41a6_a9ee_47dc4350d3da.slice/crio-3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.062513 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data" (OuterVolumeSpecName: "config-data") pod "6f3d1bf6-3f81-478a-9b1b-704b8083ba41" (UID: "6f3d1bf6-3f81-478a-9b1b-704b8083ba41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.065053 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.068363 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.068391 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.068402 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f3d1bf6-3f81-478a-9b1b-704b8083ba41-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.068413 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.087103 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config" (OuterVolumeSpecName: "config") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.122954 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5c564cf675-l776t"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.123923 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1cf9be80-df2a-4135-9203-d078ad33acf3" (UID: "1cf9be80-df2a-4135-9203-d078ad33acf3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.123942 4688 scope.go:117] "RemoveContainer" containerID="42281437f42a8d5076828069066a7ffe9c922cf065ae269f8b8dc978c1065d51"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.147348 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config" (OuterVolumeSpecName: "config") pod "424368f6-fce1-4e7d-b400-9554ec6a4fd3" (UID: "424368f6-fce1-4e7d-b400-9554ec6a4fd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.169837 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc546\" (UniqueName: \"kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546\") pod \"264dbe04-0858-4997-b5be-deecf0c6f50e\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.169897 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret\") pod \"264dbe04-0858-4997-b5be-deecf0c6f50e\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.169987 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config\") pod \"264dbe04-0858-4997-b5be-deecf0c6f50e\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.170034 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle\") pod \"264dbe04-0858-4997-b5be-deecf0c6f50e\" (UID: \"264dbe04-0858-4997-b5be-deecf0c6f50e\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.171051 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.171079 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cf9be80-df2a-4135-9203-d078ad33acf3-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.171092 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.176592 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "264dbe04-0858-4997-b5be-deecf0c6f50e" (UID: "264dbe04-0858-4997-b5be-deecf0c6f50e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.179510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "264dbe04-0858-4997-b5be-deecf0c6f50e" (UID: "264dbe04-0858-4997-b5be-deecf0c6f50e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.194421 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "264dbe04-0858-4997-b5be-deecf0c6f50e" (UID: "264dbe04-0858-4997-b5be-deecf0c6f50e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.201261 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546" (OuterVolumeSpecName: "kube-api-access-lc546") pod "264dbe04-0858-4997-b5be-deecf0c6f50e" (UID: "264dbe04-0858-4997-b5be-deecf0c6f50e"). InnerVolumeSpecName "kube-api-access-lc546". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.203527 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "424368f6-fce1-4e7d-b400-9554ec6a4fd3" (UID: "424368f6-fce1-4e7d-b400-9554ec6a4fd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.204249 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.221537 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "424368f6-fce1-4e7d-b400-9554ec6a4fd3" (UID: "424368f6-fce1-4e7d-b400-9554ec6a4fd3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.229980 4688 scope.go:117] "RemoveContainer" containerID="28e40239fe23335b6d523c6121bb712cf79ee3a48f53a9ff08011a74cc1d2a6c"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.251922 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.266072 4688 scope.go:117] "RemoveContainer" containerID="81d4a6e4607fea27af644d4e4ec40d0a062d7c83702b66497500bc57f011b18f"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273482 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273524 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273537 4688 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273549 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424368f6-fce1-4e7d-b400-9554ec6a4fd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273562 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc546\" (UniqueName: \"kubernetes.io/projected/264dbe04-0858-4997-b5be-deecf0c6f50e-kube-api-access-lc546\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.273578 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/264dbe04-0858-4997-b5be-deecf0c6f50e-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.294309 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-h49m6"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.352096 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.434265 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf9be80-df2a-4135-9203-d078ad33acf3" path="/var/lib/kubelet/pods/1cf9be80-df2a-4135-9203-d078ad33acf3/volumes"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.442349 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264dbe04-0858-4997-b5be-deecf0c6f50e" path="/var/lib/kubelet/pods/264dbe04-0858-4997-b5be-deecf0c6f50e/volumes"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.443565 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.443708 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.451442 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.451570 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.470622 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.590838 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.590953 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkp9\" (UniqueName: \"kubernetes.io/projected/cb86de93-e273-417f-8c60-8b6201635766-kube-api-access-xdkp9\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.591051 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb86de93-e273-417f-8c60-8b6201635766-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.591171 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.591226 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.591323 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.688211 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d568b8954-n7nkz"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693137 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693234 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkp9\" (UniqueName: \"kubernetes.io/projected/cb86de93-e273-417f-8c60-8b6201635766-kube-api-access-xdkp9\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693276 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb86de93-e273-417f-8c60-8b6201635766-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.693434 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.694436 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb86de93-e273-417f-8c60-8b6201635766-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.705127 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.720559 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.744214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.744215 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkp9\" (UniqueName: \"kubernetes.io/projected/cb86de93-e273-417f-8c60-8b6201635766-kube-api-access-xdkp9\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.744403 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb86de93-e273-417f-8c60-8b6201635766-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb86de93-e273-417f-8c60-8b6201635766\") " pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.749284 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74745fc86b-bp676"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.777520 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74745fc86b-bp676"]
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.794835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2k4w\" (UniqueName: \"kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w\") pod \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.794943 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs\") pod \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.795046 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle\") pod \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.795160 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom\") pod \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.795318 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data\") pod \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\" (UID: \"3f552eda-6ccb-41a6-a9ee-47dc4350d3da\") "
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.806510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs" (OuterVolumeSpecName: "logs") pod "3f552eda-6ccb-41a6-a9ee-47dc4350d3da" (UID: "3f552eda-6ccb-41a6-a9ee-47dc4350d3da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.820970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f552eda-6ccb-41a6-a9ee-47dc4350d3da" (UID: "3f552eda-6ccb-41a6-a9ee-47dc4350d3da"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.831874 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w" (OuterVolumeSpecName: "kube-api-access-n2k4w") pod "3f552eda-6ccb-41a6-a9ee-47dc4350d3da" (UID: "3f552eda-6ccb-41a6-a9ee-47dc4350d3da"). InnerVolumeSpecName "kube-api-access-n2k4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.861364 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f552eda-6ccb-41a6-a9ee-47dc4350d3da" (UID: "3f552eda-6ccb-41a6-a9ee-47dc4350d3da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.897211 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data" (OuterVolumeSpecName: "config-data") pod "3f552eda-6ccb-41a6-a9ee-47dc4350d3da" (UID: "3f552eda-6ccb-41a6-a9ee-47dc4350d3da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.909861 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2k4w\" (UniqueName: \"kubernetes.io/projected/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-kube-api-access-n2k4w\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.909906 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.909916 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.909925 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.909967 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f552eda-6ccb-41a6-a9ee-47dc4350d3da-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.932720 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.939781 4688 generic.go:334] "Generic (PLEG): container finished" podID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerID="3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493" exitCode=0
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.939839 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerDied","Data":"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"}
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.939868 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d568b8954-n7nkz" event={"ID":"3f552eda-6ccb-41a6-a9ee-47dc4350d3da","Type":"ContainerDied","Data":"4a4481b2d4861f09b25225e0f3699291229de9ec594f5af0cf727b51917edeb4"}
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.939905 4688 scope.go:117] "RemoveContainer" containerID="3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.940001 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d568b8954-n7nkz"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.953559 4688 generic.go:334] "Generic (PLEG): container finished" podID="cbbd26aa-7783-4958-95d0-a590f636947c" containerID="8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae" exitCode=2
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.953628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerDied","Data":"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"}
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.955062 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.956380 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5043fc78-cadf-4542-8673-2a02149409f9","Type":"ContainerStarted","Data":"4947e709d07a1ea008a572045434eb97ea00ccb6c90db5d40bf8b66cca5c4f0e"}
Jan 23 18:28:49 crc kubenswrapper[4688]: I0123 18:28:49.965686 4688 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="264dbe04-0858-4997-b5be-deecf0c6f50e" podUID="5043fc78-cadf-4542-8673-2a02149409f9"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.003436 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"]
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.015373 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d568b8954-n7nkz"]
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.018774 4688 scope.go:117] "RemoveContainer" containerID="5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.100607 4688 scope.go:117] "RemoveContainer" containerID="3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"
Jan 23 18:28:50 crc kubenswrapper[4688]: E0123 18:28:50.102235 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493\": container with ID starting with 3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493 not found: ID does not exist" containerID="3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.102294 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493"} err="failed to get container status \"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493\": rpc error: code = NotFound desc = could not find container \"3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493\": container with ID starting with 3b75111754d8e3e4ef0040b4e7400040109af886ba66de0e5e0b62a5c3d77493 not found: ID does not exist"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.102324 4688 scope.go:117] "RemoveContainer" containerID="5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"
Jan 23 18:28:50 crc kubenswrapper[4688]: E0123 18:28:50.103753 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15\": container with ID starting with 5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15 not found: ID does not exist" containerID="5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.103790 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15"} err="failed to get container status \"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15\": rpc error: code = NotFound desc = could not find container \"5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15\": container with ID starting with 5d59b24191803d2c35cba0ec3bd29380b2421d4c263715d564fa08125d505a15 not found: ID does not exist"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.246853 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5c564cf675-l776t"]
Jan 23 18:28:50 crc kubenswrapper[4688]: W0123 18:28:50.257868 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8985e53c_d4f0_4f9a_96be_a540d7279676.slice/crio-b594d660008880a85da11932c82b1b4161393a3a77b18f4c4f1c4ea408a980ec WatchSource:0}: Error finding container b594d660008880a85da11932c82b1b4161393a3a77b18f4c4f1c4ea408a980ec: Status 404 returned error can't find the container with id b594d660008880a85da11932c82b1b4161393a3a77b18f4c4f1c4ea408a980ec
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.557148 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.754456 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936211 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936270 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936297 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936386 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936568 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q866\" (UniqueName: \"kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.936586 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml\") pod \"cbbd26aa-7783-4958-95d0-a590f636947c\" (UID: \"cbbd26aa-7783-4958-95d0-a590f636947c\") "
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.938520 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.939220 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.943324 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866" (OuterVolumeSpecName: "kube-api-access-7q866") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "kube-api-access-7q866". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.944812 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts" (OuterVolumeSpecName: "scripts") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.980971 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data" (OuterVolumeSpecName: "config-data") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.996823 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.997633 4688 generic.go:334] "Generic (PLEG): container finished" podID="cbbd26aa-7783-4958-95d0-a590f636947c" containerID="64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1" exitCode=0
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.997712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerDied","Data":"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"}
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.997748 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbbd26aa-7783-4958-95d0-a590f636947c","Type":"ContainerDied","Data":"e18fa56c4182bc56c5f2e3b93cd39faccc72cb4dda9a8840a61d65ce391928ec"}
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.997769 4688 scope.go:117] "RemoveContainer" containerID="8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"
Jan 23 18:28:50 crc kubenswrapper[4688]: I0123 18:28:50.997886 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.003155 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbbd26aa-7783-4958-95d0-a590f636947c" (UID: "cbbd26aa-7783-4958-95d0-a590f636947c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.008140 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c564cf675-l776t" event={"ID":"8985e53c-d4f0-4f9a-96be-a540d7279676","Type":"ContainerStarted","Data":"aa189c93e16e8cad40782e9c1955c2947a9dd8645d36b66536d2c69a75f17cf6"}
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.008211 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c564cf675-l776t" event={"ID":"8985e53c-d4f0-4f9a-96be-a540d7279676","Type":"ContainerStarted","Data":"fa5fd55c070b8c77db95817e5d3c8756b1933784e8a36b99bb7cc5dcfa90f665"}
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.008227 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5c564cf675-l776t" event={"ID":"8985e53c-d4f0-4f9a-96be-a540d7279676","Type":"ContainerStarted","Data":"b594d660008880a85da11932c82b1b4161393a3a77b18f4c4f1c4ea408a980ec"}
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.008868 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c564cf675-l776t"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.008913 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5c564cf675-l776t"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.010907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb86de93-e273-417f-8c60-8b6201635766","Type":"ContainerStarted","Data":"47ef6d3ad9a6aca133eb66664d52436e909b184130401f520e1d875ca21a57ad"}
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041709 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041762 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041781 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041796 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041807 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q866\" (UniqueName: \"kubernetes.io/projected/cbbd26aa-7783-4958-95d0-a590f636947c-kube-api-access-7q866\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041816 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbbd26aa-7783-4958-95d0-a590f636947c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.041823 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbbd26aa-7783-4958-95d0-a590f636947c-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.042086 4688 scope.go:117] "RemoveContainer" containerID="64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.043915 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5c564cf675-l776t" podStartSLOduration=3.043895004 podStartE2EDuration="3.043895004s" podCreationTimestamp="2026-01-23 18:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:51.037319254 +0000 UTC m=+1326.033143705" watchObservedRunningTime="2026-01-23 18:28:51.043895004 +0000 UTC m=+1326.039719455"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.081891 4688 scope.go:117] "RemoveContainer" containerID="8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"
Jan 23 18:28:51 crc kubenswrapper[4688]: E0123 18:28:51.084926 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae\": container with ID starting with 8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae not found: ID does not exist" containerID="8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.084988 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae"} err="failed to get container status \"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae\": rpc error: code = NotFound desc = could not find container \"8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae\": container with ID starting with 8d1fb235324abd16a35b7ace40c39de622e4008def8df208b2882caf5fbbe3ae not found: ID does not exist"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.085026 4688 scope.go:117] "RemoveContainer" containerID="64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"
Jan 23 18:28:51 crc kubenswrapper[4688]: E0123 18:28:51.085558 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1\": container with ID starting with 64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1 not found: ID does not exist" containerID="64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.085618 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1"} err="failed to get container status \"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1\": rpc error: code = NotFound desc = could not find container \"64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1\": container with ID starting with 64fb64ddd43a59a574e1ee4d8779fa5d9d3cc07b7747b2fc873ec792ea925db1 not found: ID does not exist"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.372895 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" path="/var/lib/kubelet/pods/3f552eda-6ccb-41a6-a9ee-47dc4350d3da/volumes"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.373808 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424368f6-fce1-4e7d-b400-9554ec6a4fd3" path="/var/lib/kubelet/pods/424368f6-fce1-4e7d-b400-9554ec6a4fd3/volumes"
Jan 23 18:28:51 crc kubenswrapper[4688]: I0123 18:28:51.374506 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f3d1bf6-3f81-478a-9b1b-704b8083ba41" path="/var/lib/kubelet/pods/6f3d1bf6-3f81-478a-9b1b-704b8083ba41/volumes"
Jan 23 18:28:52 crc kubenswrapper[4688]: I0123 18:28:52.029053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb86de93-e273-417f-8c60-8b6201635766","Type":"ContainerStarted","Data":"025a9e89db296428cd5ead9c2cd03368270962d1270be3b6bdef6d60bf0eeb18"}
Jan 23 18:28:53 crc kubenswrapper[4688]: I0123 18:28:53.054243 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb86de93-e273-417f-8c60-8b6201635766","Type":"ContainerStarted","Data":"6fdf33c099523cbaca4da6b4a1650a826f5f75d207e4ee50180b3d6b84e39794"}
Jan 23 18:28:53 crc kubenswrapper[4688]: I0123 18:28:53.122253 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.122230928 podStartE2EDuration="4.122230928s" podCreationTimestamp="2026-01-23 18:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:28:53.095551546 +0000 UTC m=+1328.091375997" watchObservedRunningTime="2026-01-23 18:28:53.122230928 +0000 UTC m=+1328.118055369"
Jan 23 18:28:54 crc kubenswrapper[4688]: I0123 18:28:54.934453 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.124724 4688 generic.go:334] "Generic (PLEG): container finished" podID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerID="1a81bd590df2aec524c3ec13233f98ebdf699927b59ed3118001e95b865fe0d3" exitCode=137
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.124788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689f6b4f86-pbwfh" event={"ID":"56f27597-f638-4b6d-84e9-3a3671c089ac","Type":"ContainerDied","Data":"1a81bd590df2aec524c3ec13233f98ebdf699927b59ed3118001e95b865fe0d3"}
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.128515 4688 generic.go:334] "Generic (PLEG): container finished" podID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerID="edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6" exitCode=137
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.128559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerDied","Data":"edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6"}
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.138475 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c564cf675-l776t"
Jan 23 18:28:59 crc kubenswrapper[4688]: I0123 18:28:59.140008 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5c564cf675-l776t"
Jan 23 18:29:00 crc kubenswrapper[4688]: I0123 18:29:00.406474 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 18:29:04 crc kubenswrapper[4688]: I0123 18:29:04.241151 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5043fc78-cadf-4542-8673-2a02149409f9","Type":"ContainerStarted","Data":"028e535de6e4d23e99d786d2baf616ebe616598ca920d12f2bab2c34886d0217"}
Jan 23 18:29:04 crc kubenswrapper[4688]: I0123 18:29:04.245821 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-689f6b4f86-pbwfh" event={"ID":"56f27597-f638-4b6d-84e9-3a3671c089ac","Type":"ContainerStarted","Data":"7afe603c60015f7f196e4d81e64f55ed6a6f6eebc5c38493b1065b677a7d2dc4"}
Jan 23 18:29:04 crc kubenswrapper[4688]: I0123 18:29:04.248463 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerStarted","Data":"334fe52ece4f91dd7ce55d73d8d16cb635250937aea94cfccd2aa29041b1f9e8"}
Jan 23 18:29:04 crc kubenswrapper[4688]: I0123 18:29:04.262590 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=8.686180594 podStartE2EDuration="22.262565545s" podCreationTimestamp="2026-01-23 18:28:42 +0000 UTC" firstStartedPulling="2026-01-23 18:28:49.230774735 +0000 UTC m=+1324.226599176" lastFinishedPulling="2026-01-23 18:29:02.807159686 +0000 UTC m=+1337.802984127" observedRunningTime="2026-01-23 18:29:04.25961462 +0000 UTC m=+1339.255439061" watchObservedRunningTime="2026-01-23 18:29:04.262565545 +0000 UTC m=+1339.258389986"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.118063 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295472 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbkq\" (UniqueName: \"kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295699 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295777 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295940 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.295978 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.296041 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.296062 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.296650 4688 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74c198d0-1987-4562-9129-0df56fc666cb-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.297124 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs" (OuterVolumeSpecName: "logs") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.301490 4688 generic.go:334] "Generic (PLEG): container finished" podID="74c198d0-1987-4562-9129-0df56fc666cb" containerID="24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79" exitCode=137
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.301549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerDied","Data":"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"}
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.301587 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"74c198d0-1987-4562-9129-0df56fc666cb","Type":"ContainerDied","Data":"d4d3cdde31884b2d9cb7711f1dfc0bc0f5b06f29ca1d781fe9f4b8930881eb96"}
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.301609 4688 scope.go:117] "RemoveContainer" containerID="24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.301857 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.308011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.310203 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq" (OuterVolumeSpecName: "kube-api-access-vsbkq") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "kube-api-access-vsbkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.318421 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts" (OuterVolumeSpecName: "scripts") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.337965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.408603 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data" (OuterVolumeSpecName: "config-data") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.411680 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") pod \"74c198d0-1987-4562-9129-0df56fc666cb\" (UID: \"74c198d0-1987-4562-9129-0df56fc666cb\") "
Jan 23 18:29:07 crc kubenswrapper[4688]: W0123 18:29:07.412558 4688 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/74c198d0-1987-4562-9129-0df56fc666cb/volumes/kubernetes.io~secret/config-data
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.412582 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data" (OuterVolumeSpecName: "config-data") pod "74c198d0-1987-4562-9129-0df56fc666cb" (UID: "74c198d0-1987-4562-9129-0df56fc666cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413456 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413486 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413502 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413514 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c198d0-1987-4562-9129-0df56fc666cb-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413529 4688 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74c198d0-1987-4562-9129-0df56fc666cb-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.413541 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbkq\" (UniqueName: \"kubernetes.io/projected/74c198d0-1987-4562-9129-0df56fc666cb-kube-api-access-vsbkq\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.436857 4688 scope.go:117] "RemoveContainer" containerID="ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.456044 4688 scope.go:117] "RemoveContainer" containerID="24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.456687 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79\": container with ID starting with 24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79 not found: ID does not exist" containerID="24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.456737 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79"} err="failed to get container status \"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79\": rpc error: code = NotFound desc = could not find container \"24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79\": container with ID starting with 24089462e54a05fbaf9d9688d7cbe84acfc66dfa1a0af928b820645e89723d79 not found: ID does not exist"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.456763 4688 scope.go:117] "RemoveContainer" containerID="ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.456989 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b\": container with ID starting with ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b not found: ID does not exist" containerID="ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.457003 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b"} err="failed to get container status \"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b\": rpc error: code = NotFound desc = could not find container \"ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b\": container with ID starting with ddcee2a7c33ea5802376e05aad7a4f462da940146ac9878e25ce7a3aef457f3b not found: ID does not exist"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.642401 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.653387 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.698487 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.699053 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.699072 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.699092 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.699103 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.699117 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.699125 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.699141 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="sg-core"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.699149 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="sg-core"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.700298 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="ceilometer-notification-agent"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700314 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="ceilometer-notification-agent"
Jan 23 18:29:07 crc kubenswrapper[4688]: E0123 18:29:07.700332 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700338 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700618 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700633 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api-log"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700645 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="sg-core"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700656 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f552eda-6ccb-41a6-a9ee-47dc4350d3da" containerName="barbican-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700664 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" containerName="ceilometer-notification-agent"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.700673 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c198d0-1987-4562-9129-0df56fc666cb" containerName="cinder-api"
Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.701925 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.708936 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.709161 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.709473 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.719772 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-scripts\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.719822 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.719876 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.719895 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-logs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.719980 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.720022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.720060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6fcf\" (UniqueName: \"kubernetes.io/projected/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-kube-api-access-r6fcf\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.720091 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 
18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.720133 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.727141 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.821896 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.821963 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.821998 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6fcf\" (UniqueName: \"kubernetes.io/projected/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-kube-api-access-r6fcf\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822062 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822092 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-scripts\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822116 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822159 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822177 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-logs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.822741 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.823052 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-logs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.826932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.826954 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.827308 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-scripts\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.833001 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.833404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.836608 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:07 crc kubenswrapper[4688]: I0123 18:29:07.850872 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6fcf\" (UniqueName: \"kubernetes.io/projected/5d04ebda-89c7-4c9c-9d26-280a6d1598f8-kube-api-access-r6fcf\") pod \"cinder-api-0\" (UID: \"5d04ebda-89c7-4c9c-9d26-280a6d1598f8\") " pod="openstack/cinder-api-0" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.093170 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.258577 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.260295 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.497785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.498250 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-689f6b4f86-pbwfh" Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.679449 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.838546 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:08 crc kubenswrapper[4688]: I0123 18:29:08.839025 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="1471c070-2a62-4080-95d8-4f60a523efaa" containerName="watcher-decision-engine" containerID="cri-o://f95bd7d962c5bfada63e3514a530a1139b422e0c58ae9d1e803f35f91a554f59" gracePeriod=30 Jan 23 18:29:09 crc kubenswrapper[4688]: I0123 18:29:09.397170 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c198d0-1987-4562-9129-0df56fc666cb" path="/var/lib/kubelet/pods/74c198d0-1987-4562-9129-0df56fc666cb/volumes" Jan 23 18:29:09 crc kubenswrapper[4688]: I0123 18:29:09.398200 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d04ebda-89c7-4c9c-9d26-280a6d1598f8","Type":"ContainerStarted","Data":"d98064f7b0cd5d20ec03b92f1cd8c6f166610f5c40f8c400ae04b396cceb7148"} Jan 23 18:29:10 crc kubenswrapper[4688]: I0123 18:29:10.390886 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d04ebda-89c7-4c9c-9d26-280a6d1598f8","Type":"ContainerStarted","Data":"7ad607cee7aea0e887999d8717b98f542c60a1c55713b9bd003df958f34a43fe"} Jan 23 18:29:11 crc kubenswrapper[4688]: I0123 18:29:11.403787 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d04ebda-89c7-4c9c-9d26-280a6d1598f8","Type":"ContainerStarted","Data":"062da43224e62e17845ebbeba2029445b9be21ff35664c53463c1ab0df936ed3"} Jan 23 18:29:11 crc kubenswrapper[4688]: I0123 18:29:11.404376 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.111175 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.111153552 podStartE2EDuration="8.111153552s" podCreationTimestamp="2026-01-23 18:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:11.433087773 +0000 UTC m=+1346.428912214" watchObservedRunningTime="2026-01-23 18:29:15.111153552 +0000 UTC m=+1350.106977993" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.118842 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pwmxl"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.120317 4688 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.138725 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pwmxl"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.167648 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8bz\" (UniqueName: \"kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.167865 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.269915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.270106 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8bz\" (UniqueName: \"kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.270756 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.295579 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8bz\" (UniqueName: \"kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz\") pod \"nova-api-db-create-pwmxl\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.333951 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7c8c4"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.335907 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.348924 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a7dd-account-create-update-wqjfn"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.350433 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.352371 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.372287 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a7dd-account-create-update-wqjfn"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.372286 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvnnq\" (UniqueName: \"kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.372395 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.420258 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7c8c4"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.434886 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-d5drx"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.437122 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.445252 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.449837 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-d5drx"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.476823 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.476886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v4p\" (UniqueName: \"kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.476940 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.476980 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvnnq\" (UniqueName: \"kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.477011 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljrg\" (UniqueName: \"kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.477040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.479109 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.505672 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvnnq\" (UniqueName: \"kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq\") pod \"nova-cell0-db-create-7c8c4\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 
18:29:15.540328 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-a582-account-create-update-x7kt9"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.541880 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.546923 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.548814 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a582-account-create-update-x7kt9"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.578965 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.579021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9v4p\" (UniqueName: \"kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.579078 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.579119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.579157 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bljrg\" (UniqueName: \"kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.579234 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27q6\" (UniqueName: \"kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.584133 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " 
pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.588976 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.620116 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9v4p\" (UniqueName: \"kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p\") pod \"nova-api-a7dd-account-create-update-wqjfn\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.620648 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bljrg\" (UniqueName: \"kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg\") pod \"nova-cell1-db-create-d5drx\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.682598 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.682706 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27q6\" (UniqueName: \"kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.684100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.686248 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.698722 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.729604 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27q6\" (UniqueName: \"kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6\") pod \"nova-cell0-a582-account-create-update-x7kt9\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.748453 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.796041 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.838267 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9d4c-account-create-update-cl2rb"] Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.839920 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.875797 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 18:29:15 crc kubenswrapper[4688]: I0123 18:29:15.893390 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4c-account-create-update-cl2rb"] Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.003556 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.003689 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6s5s\" (UniqueName: \"kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.110148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.110608 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6s5s\" (UniqueName: \"kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.112426 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.143485 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6s5s\" (UniqueName: \"kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s\") pod \"nova-cell1-9d4c-account-create-update-cl2rb\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " 
pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.288887 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pwmxl"] Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.325385 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.481280 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pwmxl" event={"ID":"39ce47c8-d819-4faf-822d-7aa80bd1eb9d","Type":"ContainerStarted","Data":"ce69c9c0808f5afd4ca96388c5b706fba585158c04cd462592414ecf372c4bb6"} Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.651715 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7c8c4"] Jan 23 18:29:16 crc kubenswrapper[4688]: W0123 18:29:16.657800 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod656e3bd1_7057_486b_aa8d_98df6462e588.slice/crio-f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513 WatchSource:0}: Error finding container f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513: Status 404 returned error can't find the container with id f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513 Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.729509 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-a582-account-create-update-x7kt9"] Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.891576 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a7dd-account-create-update-wqjfn"] Jan 23 18:29:16 crc kubenswrapper[4688]: I0123 18:29:16.916217 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-d5drx"] Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.097423 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9d4c-account-create-update-cl2rb"] Jan 23 18:29:17 crc kubenswrapper[4688]: W0123 18:29:17.132128 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb07c9fd_8a23_4726_8825_2c877f74f27c.slice/crio-46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f WatchSource:0}: Error finding container 46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f: Status 404 returned error can't find the container with id 46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.525732 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" event={"ID":"c823d536-422a-4bf8-9959-741070231ff4","Type":"ContainerStarted","Data":"57f8e4fb6d6d1022c1d0c71f0755b0505fe00ad90da59ce0779622b7b336a835"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.526105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" event={"ID":"c823d536-422a-4bf8-9959-741070231ff4","Type":"ContainerStarted","Data":"caf49d307de7bfe152045b31dd33cd1623ff28ee5ee9231b9eb39e4d67d2d590"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.546140 4688 generic.go:334] "Generic (PLEG): container finished" podID="39ce47c8-d819-4faf-822d-7aa80bd1eb9d" 
containerID="a2694b135b404918ecd9d4e96a9fbd15bc53c646e7e3293c9f7f838a9c52f5af" exitCode=0 Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.546291 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pwmxl" event={"ID":"39ce47c8-d819-4faf-822d-7aa80bd1eb9d","Type":"ContainerDied","Data":"a2694b135b404918ecd9d4e96a9fbd15bc53c646e7e3293c9f7f838a9c52f5af"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.552591 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" podStartSLOduration=2.552561891 podStartE2EDuration="2.552561891s" podCreationTimestamp="2026-01-23 18:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:17.547403062 +0000 UTC m=+1352.543227503" watchObservedRunningTime="2026-01-23 18:29:17.552561891 +0000 UTC m=+1352.548386332" Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.574810 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-d5drx" event={"ID":"7b04947b-c624-4375-805e-43988d26b5aa","Type":"ContainerStarted","Data":"6f96f2954d89021983ed9dbc411c1f7cd6b04f11c06146aa6000ea329be5f3b6"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.574873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-d5drx" event={"ID":"7b04947b-c624-4375-805e-43988d26b5aa","Type":"ContainerStarted","Data":"3785f2e436b847b62bf583502e23541e28d2533c865939b847ff2ddd44560c05"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.589622 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" event={"ID":"9cb41621-9757-493e-8164-6822693e8106","Type":"ContainerStarted","Data":"4f25ed3ddb32ffa15900d1526c4e010ca3e8ccff8dd2e77dfb1dd697f8900004"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.589669 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" event={"ID":"9cb41621-9757-493e-8164-6822693e8106","Type":"ContainerStarted","Data":"94c2ffece55d81352d3b8ec20262a4a7dc6adb9d1fa3fdf944357dbd70c7c7c8"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.596482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" event={"ID":"bb07c9fd-8a23-4726-8825-2c877f74f27c","Type":"ContainerStarted","Data":"46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.605641 4688 generic.go:334] "Generic (PLEG): container finished" podID="656e3bd1-7057-486b-aa8d-98df6462e588" containerID="a68d12fae4044a7655dce5abfa8a7f9dd42de20e2d3c53afc1f43d604e4f93fe" exitCode=0 Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.605693 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c8c4" event={"ID":"656e3bd1-7057-486b-aa8d-98df6462e588","Type":"ContainerDied","Data":"a68d12fae4044a7655dce5abfa8a7f9dd42de20e2d3c53afc1f43d604e4f93fe"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.605719 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c8c4" event={"ID":"656e3bd1-7057-486b-aa8d-98df6462e588","Type":"ContainerStarted","Data":"f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513"} Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.603870 4688 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-d5drx" podStartSLOduration=2.603847275 podStartE2EDuration="2.603847275s" podCreationTimestamp="2026-01-23 18:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:17.603049892 +0000 UTC m=+1352.598874333" watchObservedRunningTime="2026-01-23 18:29:17.603847275 +0000 UTC m=+1352.599671716" Jan 23 18:29:17 crc kubenswrapper[4688]: I0123 18:29:17.631985 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" podStartSLOduration=2.631960279 podStartE2EDuration="2.631960279s" podCreationTimestamp="2026-01-23 18:29:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:17.616981455 +0000 UTC m=+1352.612805896" watchObservedRunningTime="2026-01-23 18:29:17.631960279 +0000 UTC m=+1352.627784720" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.260053 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.500535 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.618946 4688 generic.go:334] "Generic (PLEG): container finished" podID="bb07c9fd-8a23-4726-8825-2c877f74f27c" containerID="5e36a4be3644b921e129cae4d97dbf336555ebef4a59317104608a4c070ecde2" exitCode=0 Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.619074 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" event={"ID":"bb07c9fd-8a23-4726-8825-2c877f74f27c","Type":"ContainerDied","Data":"5e36a4be3644b921e129cae4d97dbf336555ebef4a59317104608a4c070ecde2"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.626874 4688 generic.go:334] "Generic (PLEG): container finished" podID="1471c070-2a62-4080-95d8-4f60a523efaa" containerID="f95bd7d962c5bfada63e3514a530a1139b422e0c58ae9d1e803f35f91a554f59" exitCode=0 Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.626972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1471c070-2a62-4080-95d8-4f60a523efaa","Type":"ContainerDied","Data":"f95bd7d962c5bfada63e3514a530a1139b422e0c58ae9d1e803f35f91a554f59"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.627025 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1471c070-2a62-4080-95d8-4f60a523efaa","Type":"ContainerDied","Data":"9255244dec4d66893047215440240046648715208803df10420601cb7746ebf6"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.627040 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9255244dec4d66893047215440240046648715208803df10420601cb7746ebf6" Jan 23 18:29:18 
crc kubenswrapper[4688]: I0123 18:29:18.629268 4688 generic.go:334] "Generic (PLEG): container finished" podID="c823d536-422a-4bf8-9959-741070231ff4" containerID="57f8e4fb6d6d1022c1d0c71f0755b0505fe00ad90da59ce0779622b7b336a835" exitCode=0 Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.629333 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" event={"ID":"c823d536-422a-4bf8-9959-741070231ff4","Type":"ContainerDied","Data":"57f8e4fb6d6d1022c1d0c71f0755b0505fe00ad90da59ce0779622b7b336a835"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.631637 4688 generic.go:334] "Generic (PLEG): container finished" podID="7b04947b-c624-4375-805e-43988d26b5aa" containerID="6f96f2954d89021983ed9dbc411c1f7cd6b04f11c06146aa6000ea329be5f3b6" exitCode=0 Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.631698 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-d5drx" event={"ID":"7b04947b-c624-4375-805e-43988d26b5aa","Type":"ContainerDied","Data":"6f96f2954d89021983ed9dbc411c1f7cd6b04f11c06146aa6000ea329be5f3b6"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.640841 4688 generic.go:334] "Generic (PLEG): container finished" podID="9cb41621-9757-493e-8164-6822693e8106" containerID="4f25ed3ddb32ffa15900d1526c4e010ca3e8ccff8dd2e77dfb1dd697f8900004" exitCode=0 Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.641270 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" event={"ID":"9cb41621-9757-493e-8164-6822693e8106","Type":"ContainerDied","Data":"4f25ed3ddb32ffa15900d1526c4e010ca3e8ccff8dd2e77dfb1dd697f8900004"} Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.718529 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.899769 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle\") pod \"1471c070-2a62-4080-95d8-4f60a523efaa\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.899945 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca\") pod \"1471c070-2a62-4080-95d8-4f60a523efaa\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.900079 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data\") pod \"1471c070-2a62-4080-95d8-4f60a523efaa\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.900150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs\") pod \"1471c070-2a62-4080-95d8-4f60a523efaa\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.900307 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wl57\" (UniqueName: \"kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57\") pod \"1471c070-2a62-4080-95d8-4f60a523efaa\" (UID: \"1471c070-2a62-4080-95d8-4f60a523efaa\") " Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.905403 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs" (OuterVolumeSpecName: "logs") pod "1471c070-2a62-4080-95d8-4f60a523efaa" (UID: "1471c070-2a62-4080-95d8-4f60a523efaa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.928857 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57" (OuterVolumeSpecName: "kube-api-access-2wl57") pod "1471c070-2a62-4080-95d8-4f60a523efaa" (UID: "1471c070-2a62-4080-95d8-4f60a523efaa"). InnerVolumeSpecName "kube-api-access-2wl57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.956328 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1471c070-2a62-4080-95d8-4f60a523efaa" (UID: "1471c070-2a62-4080-95d8-4f60a523efaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.979631 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1471c070-2a62-4080-95d8-4f60a523efaa" (UID: "1471c070-2a62-4080-95d8-4f60a523efaa"). 
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:18 crc kubenswrapper[4688]: I0123 18:29:18.997276 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data" (OuterVolumeSpecName: "config-data") pod "1471c070-2a62-4080-95d8-4f60a523efaa" (UID: "1471c070-2a62-4080-95d8-4f60a523efaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.009110 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.009162 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1471c070-2a62-4080-95d8-4f60a523efaa-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.009178 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wl57\" (UniqueName: \"kubernetes.io/projected/1471c070-2a62-4080-95d8-4f60a523efaa-kube-api-access-2wl57\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.009205 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.009217 4688 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1471c070-2a62-4080-95d8-4f60a523efaa-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.285742 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.424151 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts\") pod \"656e3bd1-7057-486b-aa8d-98df6462e588\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.424651 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvnnq\" (UniqueName: \"kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq\") pod \"656e3bd1-7057-486b-aa8d-98df6462e588\" (UID: \"656e3bd1-7057-486b-aa8d-98df6462e588\") " Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.435594 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "656e3bd1-7057-486b-aa8d-98df6462e588" (UID: "656e3bd1-7057-486b-aa8d-98df6462e588"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.438415 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq" (OuterVolumeSpecName: "kube-api-access-zvnnq") pod "656e3bd1-7057-486b-aa8d-98df6462e588" (UID: "656e3bd1-7057-486b-aa8d-98df6462e588"). InnerVolumeSpecName "kube-api-access-zvnnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.529209 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/656e3bd1-7057-486b-aa8d-98df6462e588-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.529249 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvnnq\" (UniqueName: \"kubernetes.io/projected/656e3bd1-7057-486b-aa8d-98df6462e588-kube-api-access-zvnnq\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.530243 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.632896 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts\") pod \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.633018 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct8bz\" (UniqueName: \"kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz\") pod \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\" (UID: \"39ce47c8-d819-4faf-822d-7aa80bd1eb9d\") " Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.634680 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39ce47c8-d819-4faf-822d-7aa80bd1eb9d" (UID: "39ce47c8-d819-4faf-822d-7aa80bd1eb9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.645470 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz" (OuterVolumeSpecName: "kube-api-access-ct8bz") pod "39ce47c8-d819-4faf-822d-7aa80bd1eb9d" (UID: "39ce47c8-d819-4faf-822d-7aa80bd1eb9d"). InnerVolumeSpecName "kube-api-access-ct8bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.699868 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7c8c4" event={"ID":"656e3bd1-7057-486b-aa8d-98df6462e588","Type":"ContainerDied","Data":"f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513"} Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.699912 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7c8c4" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.699917 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f408d815a09361440115692605cb34c641c41089879dedc5f866e42ecece8513" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.709330 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.709434 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pwmxl" event={"ID":"39ce47c8-d819-4faf-822d-7aa80bd1eb9d","Type":"ContainerDied","Data":"ce69c9c0808f5afd4ca96388c5b706fba585158c04cd462592414ecf372c4bb6"} Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.709469 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce69c9c0808f5afd4ca96388c5b706fba585158c04cd462592414ecf372c4bb6" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.709492 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pwmxl" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.746850 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.746906 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct8bz\" (UniqueName: \"kubernetes.io/projected/39ce47c8-d819-4faf-822d-7aa80bd1eb9d-kube-api-access-ct8bz\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.770094 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.783731 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.798219 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:19 crc kubenswrapper[4688]: E0123 18:29:19.798826 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ce47c8-d819-4faf-822d-7aa80bd1eb9d" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.798855 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ce47c8-d819-4faf-822d-7aa80bd1eb9d" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: E0123 18:29:19.798876 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1471c070-2a62-4080-95d8-4f60a523efaa" containerName="watcher-decision-engine" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.798886 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1471c070-2a62-4080-95d8-4f60a523efaa" containerName="watcher-decision-engine" Jan 23 18:29:19 crc kubenswrapper[4688]: E0123 18:29:19.798904 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656e3bd1-7057-486b-aa8d-98df6462e588" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.798911 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="656e3bd1-7057-486b-aa8d-98df6462e588" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.799127 4688 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1471c070-2a62-4080-95d8-4f60a523efaa" containerName="watcher-decision-engine" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.799154 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="656e3bd1-7057-486b-aa8d-98df6462e588" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.799167 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ce47c8-d819-4faf-822d-7aa80bd1eb9d" containerName="mariadb-database-create" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.799962 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.820552 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.830505 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.950731 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.950824 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.950889 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e25f1cb-df6e-441a-ba49-b8de51d05434-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.950967 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:19 crc kubenswrapper[4688]: I0123 18:29:19.951011 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpkgx\" (UniqueName: \"kubernetes.io/projected/0e25f1cb-df6e-441a-ba49-b8de51d05434-kube-api-access-cpkgx\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.053428 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.053844 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cpkgx\" (UniqueName: \"kubernetes.io/projected/0e25f1cb-df6e-441a-ba49-b8de51d05434-kube-api-access-cpkgx\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.053900 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.054018 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.054091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e25f1cb-df6e-441a-ba49-b8de51d05434-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.054540 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e25f1cb-df6e-441a-ba49-b8de51d05434-logs\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.062238 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.092982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.094078 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpkgx\" (UniqueName: \"kubernetes.io/projected/0e25f1cb-df6e-441a-ba49-b8de51d05434-kube-api-access-cpkgx\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.095888 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0e25f1cb-df6e-441a-ba49-b8de51d05434-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0e25f1cb-df6e-441a-ba49-b8de51d05434\") " pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.130873 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.412720 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.571364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts\") pod \"7b04947b-c624-4375-805e-43988d26b5aa\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.571453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bljrg\" (UniqueName: \"kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg\") pod \"7b04947b-c624-4375-805e-43988d26b5aa\" (UID: \"7b04947b-c624-4375-805e-43988d26b5aa\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.573179 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b04947b-c624-4375-805e-43988d26b5aa" (UID: "7b04947b-c624-4375-805e-43988d26b5aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.584697 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.592498 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg" (OuterVolumeSpecName: "kube-api-access-bljrg") pod "7b04947b-c624-4375-805e-43988d26b5aa" (UID: "7b04947b-c624-4375-805e-43988d26b5aa"). InnerVolumeSpecName "kube-api-access-bljrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.674576 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b04947b-c624-4375-805e-43988d26b5aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.674623 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bljrg\" (UniqueName: \"kubernetes.io/projected/7b04947b-c624-4375-805e-43988d26b5aa-kube-api-access-bljrg\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.769141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-d5drx" event={"ID":"7b04947b-c624-4375-805e-43988d26b5aa","Type":"ContainerDied","Data":"3785f2e436b847b62bf583502e23541e28d2533c865939b847ff2ddd44560c05"} Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.769428 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3785f2e436b847b62bf583502e23541e28d2533c865939b847ff2ddd44560c05" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.769516 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-d5drx" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.775812 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9v4p\" (UniqueName: \"kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p\") pod \"c823d536-422a-4bf8-9959-741070231ff4\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.776671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts\") pod \"c823d536-422a-4bf8-9959-741070231ff4\" (UID: \"c823d536-422a-4bf8-9959-741070231ff4\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.777148 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c823d536-422a-4bf8-9959-741070231ff4" (UID: "c823d536-422a-4bf8-9959-741070231ff4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.777868 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c823d536-422a-4bf8-9959-741070231ff4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.792113 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.792108 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a7dd-account-create-update-wqjfn" event={"ID":"c823d536-422a-4bf8-9959-741070231ff4","Type":"ContainerDied","Data":"caf49d307de7bfe152045b31dd33cd1623ff28ee5ee9231b9eb39e4d67d2d590"} Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.792722 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf49d307de7bfe152045b31dd33cd1623ff28ee5ee9231b9eb39e4d67d2d590" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.792969 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p" (OuterVolumeSpecName: "kube-api-access-x9v4p") pod "c823d536-422a-4bf8-9959-741070231ff4" (UID: "c823d536-422a-4bf8-9959-741070231ff4"). InnerVolumeSpecName "kube-api-access-x9v4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.798355 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.806627 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" event={"ID":"9cb41621-9757-493e-8164-6822693e8106","Type":"ContainerDied","Data":"94c2ffece55d81352d3b8ec20262a4a7dc6adb9d1fa3fdf944357dbd70c7c7c8"} Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.806942 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c2ffece55d81352d3b8ec20262a4a7dc6adb9d1fa3fdf944357dbd70c7c7c8" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.828582 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" event={"ID":"bb07c9fd-8a23-4726-8825-2c877f74f27c","Type":"ContainerDied","Data":"46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f"} Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.828879 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e83d3e227f797c38c01b2ade3f2c9ddc1d66eb08fe1da1a819ea2d805cfb4f" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.828846 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9d4c-account-create-update-cl2rb" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.850819 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.875342 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.882160 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9v4p\" (UniqueName: \"kubernetes.io/projected/c823d536-422a-4bf8-9959-741070231ff4-kube-api-access-x9v4p\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.983906 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6s5s\" (UniqueName: \"kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s\") pod \"bb07c9fd-8a23-4726-8825-2c877f74f27c\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.984068 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts\") pod \"9cb41621-9757-493e-8164-6822693e8106\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.984144 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts\") pod \"bb07c9fd-8a23-4726-8825-2c877f74f27c\" (UID: \"bb07c9fd-8a23-4726-8825-2c877f74f27c\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.984167 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m27q6\" (UniqueName: \"kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6\") pod \"9cb41621-9757-493e-8164-6822693e8106\" (UID: \"9cb41621-9757-493e-8164-6822693e8106\") " Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.984685 4688 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cb41621-9757-493e-8164-6822693e8106" (UID: "9cb41621-9757-493e-8164-6822693e8106"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.984854 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cb41621-9757-493e-8164-6822693e8106-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.985434 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb07c9fd-8a23-4726-8825-2c877f74f27c" (UID: "bb07c9fd-8a23-4726-8825-2c877f74f27c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.989495 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6" (OuterVolumeSpecName: "kube-api-access-m27q6") pod "9cb41621-9757-493e-8164-6822693e8106" (UID: "9cb41621-9757-493e-8164-6822693e8106"). InnerVolumeSpecName "kube-api-access-m27q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:20 crc kubenswrapper[4688]: I0123 18:29:20.993370 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s" (OuterVolumeSpecName: "kube-api-access-z6s5s") pod "bb07c9fd-8a23-4726-8825-2c877f74f27c" (UID: "bb07c9fd-8a23-4726-8825-2c877f74f27c"). InnerVolumeSpecName "kube-api-access-z6s5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.087464 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6s5s\" (UniqueName: \"kubernetes.io/projected/bb07c9fd-8a23-4726-8825-2c877f74f27c-kube-api-access-z6s5s\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.087512 4688 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb07c9fd-8a23-4726-8825-2c877f74f27c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.087530 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m27q6\" (UniqueName: \"kubernetes.io/projected/9cb41621-9757-493e-8164-6822693e8106-kube-api-access-m27q6\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.391361 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1471c070-2a62-4080-95d8-4f60a523efaa" path="/var/lib/kubelet/pods/1471c070-2a62-4080-95d8-4f60a523efaa/volumes" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.392342 4688 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podcbbd26aa-7783-4958-95d0-a590f636947c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podcbbd26aa-7783-4958-95d0-a590f636947c] : Timed out while waiting for systemd to remove kubepods-besteffort-podcbbd26aa_7783_4958_95d0_a590f636947c.slice" Jan 23 18:29:21 crc kubenswrapper[4688]: E0123 18:29:21.392399 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podcbbd26aa-7783-4958-95d0-a590f636947c] : unable to destroy cgroup paths for cgroup [kubepods besteffort podcbbd26aa-7783-4958-95d0-a590f636947c] : Timed out while waiting for systemd to remove kubepods-besteffort-podcbbd26aa_7783_4958_95d0_a590f636947c.slice" pod="openstack/ceilometer-0" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.614948 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.851424 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.853528 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e25f1cb-df6e-441a-ba49-b8de51d05434","Type":"ContainerStarted","Data":"98722d43140c8a56f0bf19b2db76d11031c1339ff6a559985fde8faa24b18124"} Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.854501 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0e25f1cb-df6e-441a-ba49-b8de51d05434","Type":"ContainerStarted","Data":"1b4686a9080c3d8f477ddc4ab7368fa0a6b6723516f9755ba479468b61e01cc2"} Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.854680 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-a582-account-create-update-x7kt9" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.915314 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.915289339 podStartE2EDuration="2.915289339s" podCreationTimestamp="2026-01-23 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:21.882450889 +0000 UTC m=+1356.878275330" watchObservedRunningTime="2026-01-23 18:29:21.915289339 +0000 UTC m=+1356.911113790" Jan 23 18:29:21 crc kubenswrapper[4688]: I0123 18:29:21.992275 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.008632 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.019536 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:22 crc kubenswrapper[4688]: E0123 18:29:22.020148 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb07c9fd-8a23-4726-8825-2c877f74f27c" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020166 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb07c9fd-8a23-4726-8825-2c877f74f27c" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: E0123 18:29:22.020199 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c823d536-422a-4bf8-9959-741070231ff4" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020209 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c823d536-422a-4bf8-9959-741070231ff4" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: E0123 18:29:22.020231 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb41621-9757-493e-8164-6822693e8106" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020240 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb41621-9757-493e-8164-6822693e8106" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: E0123 18:29:22.020288 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b04947b-c624-4375-805e-43988d26b5aa" containerName="mariadb-database-create" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020297 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b04947b-c624-4375-805e-43988d26b5aa" containerName="mariadb-database-create" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020554 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b04947b-c624-4375-805e-43988d26b5aa" containerName="mariadb-database-create" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020581 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb41621-9757-493e-8164-6822693e8106" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020605 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c823d536-422a-4bf8-9959-741070231ff4" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.020621 4688 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bb07c9fd-8a23-4726-8825-2c877f74f27c" containerName="mariadb-account-create-update" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.028864 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.029031 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.033141 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.033355 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119406 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119470 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119497 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119520 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119583 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gdd\" (UniqueName: \"kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.119970 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.120138 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.222316 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.222373 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.222397 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.222930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.222423 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.223065 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gdd\" (UniqueName: \"kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.223556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.223869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.223937 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.229536 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.230861 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.236405 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.237773 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.251306 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gdd\" (UniqueName: \"kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd\") pod \"ceilometer-0\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.356623 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:22 crc kubenswrapper[4688]: W0123 18:29:22.939139 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode62147ba_ad27_4edd_9c92_978850dcad49.slice/crio-1c44d6a8c9e15b901cc40e5aa5592782cc905e81e596d0f364ac3d636297c911 WatchSource:0}: Error finding container 1c44d6a8c9e15b901cc40e5aa5592782cc905e81e596d0f364ac3d636297c911: Status 404 returned error can't find the container with id 1c44d6a8c9e15b901cc40e5aa5592782cc905e81e596d0f364ac3d636297c911 Jan 23 18:29:22 crc kubenswrapper[4688]: I0123 18:29:22.942680 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:23 crc kubenswrapper[4688]: I0123 18:29:23.368975 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbbd26aa-7783-4958-95d0-a590f636947c" path="/var/lib/kubelet/pods/cbbd26aa-7783-4958-95d0-a590f636947c/volumes" Jan 23 18:29:23 crc kubenswrapper[4688]: I0123 18:29:23.874554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerStarted","Data":"1c44d6a8c9e15b901cc40e5aa5592782cc905e81e596d0f364ac3d636297c911"} Jan 23 18:29:24 crc kubenswrapper[4688]: I0123 18:29:24.887659 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerStarted","Data":"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5"} Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.819892 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gf29"] Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.821662 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.828833 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.829408 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.829624 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ff5fc" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.843788 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gf29"] Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.910077 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerStarted","Data":"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c"} Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.910136 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerStarted","Data":"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522"} Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.911243 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zdf\" (UniqueName: \"kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.911310 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.911363 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:25 crc kubenswrapper[4688]: I0123 18:29:25.911482 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.013424 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9zdf\" (UniqueName: \"kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.013521 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.013578 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.013708 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.020919 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.023787 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.043874 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9zdf\" (UniqueName: \"kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.048002 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts\") pod \"nova-cell0-conductor-db-sync-8gf29\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") " pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.139348 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gf29" Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.864620 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gf29"] Jan 23 18:29:26 crc kubenswrapper[4688]: I0123 18:29:26.924523 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gf29" event={"ID":"1483d3ee-c9ce-41d9-939c-caa781261c00","Type":"ContainerStarted","Data":"3a48e4cf6d34d9863393542e107374082169cd4307173580fd60a27237a3fdfb"} Jan 23 18:29:27 crc kubenswrapper[4688]: I0123 18:29:27.937744 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerStarted","Data":"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e"} Jan 23 18:29:27 crc kubenswrapper[4688]: I0123 18:29:27.938354 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:29:27 crc kubenswrapper[4688]: I0123 18:29:27.982648 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.966422326 podStartE2EDuration="6.982627989s" podCreationTimestamp="2026-01-23 18:29:21 +0000 UTC" firstStartedPulling="2026-01-23 18:29:22.941028558 +0000 UTC m=+1357.936852999" lastFinishedPulling="2026-01-23 18:29:26.957234221 +0000 UTC m=+1361.953058662" observedRunningTime="2026-01-23 18:29:27.973945488 +0000 UTC m=+1362.969769949" watchObservedRunningTime="2026-01-23 18:29:27.982627989 +0000 UTC m=+1362.978452430" Jan 23 18:29:28 crc kubenswrapper[4688]: I0123 18:29:28.260706 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:29:28 crc kubenswrapper[4688]: I0123 18:29:28.497698 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-689f6b4f86-pbwfh" podUID="56f27597-f638-4b6d-84e9-3a3671c089ac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.011889 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.012533 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-central-agent" containerID="cri-o://d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5" gracePeriod=30 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.012616 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="proxy-httpd" containerID="cri-o://5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e" gracePeriod=30 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.012630 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-notification-agent" 
containerID="cri-o://248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522" gracePeriod=30 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.012636 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="sg-core" containerID="cri-o://ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c" gracePeriod=30 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.132324 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.177770 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:30 crc kubenswrapper[4688]: E0123 18:29:30.533909 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode62147ba_ad27_4edd_9c92_978850dcad49.slice/crio-248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode62147ba_ad27_4edd_9c92_978850dcad49.slice/crio-conmon-248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982169 4688 generic.go:334] "Generic (PLEG): container finished" podID="e62147ba-ad27-4edd-9c92-978850dcad49" containerID="5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e" exitCode=0 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982504 4688 generic.go:334] "Generic (PLEG): container finished" podID="e62147ba-ad27-4edd-9c92-978850dcad49" containerID="ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c" exitCode=2 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982516 4688 generic.go:334] "Generic (PLEG): container finished" podID="e62147ba-ad27-4edd-9c92-978850dcad49" containerID="248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522" exitCode=0 Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982352 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerDied","Data":"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e"} Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982611 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerDied","Data":"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c"} Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982647 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerDied","Data":"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522"} Jan 23 18:29:30 crc kubenswrapper[4688]: I0123 18:29:30.982900 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:31 crc kubenswrapper[4688]: I0123 18:29:31.015297 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 18:29:35 crc kubenswrapper[4688]: I0123 18:29:35.235300 4688 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:29:35 crc kubenswrapper[4688]: I0123 18:29:35.236121 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-log" containerID="cri-o://c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e" gracePeriod=30 Jan 23 18:29:35 crc kubenswrapper[4688]: I0123 18:29:35.236303 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-httpd" containerID="cri-o://1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188" gracePeriod=30 Jan 23 18:29:36 crc kubenswrapper[4688]: I0123 18:29:36.078664 4688 generic.go:334] "Generic (PLEG): container finished" podID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerID="c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e" exitCode=143 Jan 23 18:29:36 crc kubenswrapper[4688]: I0123 18:29:36.078722 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerDied","Data":"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e"} Jan 23 18:29:36 crc kubenswrapper[4688]: I0123 18:29:36.213750 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 18:29:36 crc kubenswrapper[4688]: I0123 18:29:36.214306 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-log" containerID="cri-o://cdc0e255c1dddc4d207fda1f9985a2821c8145c1d47a84ebdda4f60f19e032a2" gracePeriod=30 Jan 23 18:29:36 crc kubenswrapper[4688]: I0123 18:29:36.214856 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-httpd" containerID="cri-o://34b71fd80089ea0d8c7559b2e1f370c029654e884dd53ee14302b5de033a4ba8" gracePeriod=30 Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.093902 4688 generic.go:334] "Generic (PLEG): container finished" podID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerID="cdc0e255c1dddc4d207fda1f9985a2821c8145c1d47a84ebdda4f60f19e032a2" exitCode=143 Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.093966 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerDied","Data":"cdc0e255c1dddc4d207fda1f9985a2821c8145c1d47a84ebdda4f60f19e032a2"} Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.831221 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928144 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928372 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5gdd\" (UniqueName: \"kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928453 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928549 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928580 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.928678 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts\") pod \"e62147ba-ad27-4edd-9c92-978850dcad49\" (UID: \"e62147ba-ad27-4edd-9c92-978850dcad49\") " Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.930076 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.930707 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.933787 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts" (OuterVolumeSpecName: "scripts") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.934776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd" (OuterVolumeSpecName: "kube-api-access-w5gdd") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "kube-api-access-w5gdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:37 crc kubenswrapper[4688]: I0123 18:29:37.959901 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.031689 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5gdd\" (UniqueName: \"kubernetes.io/projected/e62147ba-ad27-4edd-9c92-978850dcad49-kube-api-access-w5gdd\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.032635 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.032722 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.032788 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e62147ba-ad27-4edd-9c92-978850dcad49-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.032853 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.040510 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.046160 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data" (OuterVolumeSpecName: "config-data") pod "e62147ba-ad27-4edd-9c92-978850dcad49" (UID: "e62147ba-ad27-4edd-9c92-978850dcad49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.109333 4688 generic.go:334] "Generic (PLEG): container finished" podID="e62147ba-ad27-4edd-9c92-978850dcad49" containerID="d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5" exitCode=0 Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.109422 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerDied","Data":"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5"} Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.109464 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e62147ba-ad27-4edd-9c92-978850dcad49","Type":"ContainerDied","Data":"1c44d6a8c9e15b901cc40e5aa5592782cc905e81e596d0f364ac3d636297c911"} Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.109488 4688 scope.go:117] "RemoveContainer" containerID="5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.109695 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.115175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gf29" event={"ID":"1483d3ee-c9ce-41d9-939c-caa781261c00","Type":"ContainerStarted","Data":"87d68444f9ab664301455c7166f3f21f6146a91e7cf6b7a910a5c041f056d061"} Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.141163 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.141236 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62147ba-ad27-4edd-9c92-978850dcad49-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.143711 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-8gf29" podStartSLOduration=2.791716285 podStartE2EDuration="13.143659807s" podCreationTimestamp="2026-01-23 18:29:25 +0000 UTC" firstStartedPulling="2026-01-23 18:29:26.863693564 +0000 UTC m=+1361.859518005" lastFinishedPulling="2026-01-23 18:29:37.215637086 +0000 UTC m=+1372.211461527" observedRunningTime="2026-01-23 18:29:38.134462843 +0000 UTC m=+1373.130287284" watchObservedRunningTime="2026-01-23 18:29:38.143659807 +0000 UTC m=+1373.139484258" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.149995 4688 scope.go:117] "RemoveContainer" containerID="ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.191313 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.208097 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.215026 4688 scope.go:117] "RemoveContainer" containerID="248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231089 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:38 
crc kubenswrapper[4688]: E0123 18:29:38.231664 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="proxy-httpd" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231683 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="proxy-httpd" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.231706 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="sg-core" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231712 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="sg-core" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.231726 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-central-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231735 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-central-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.231746 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-notification-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231752 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-notification-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231958 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-notification-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231973 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="proxy-httpd" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.231993 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="sg-core" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.232004 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" containerName="ceilometer-central-agent" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.233857 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.237800 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.237822 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.246743 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.258041 4688 scope.go:117] "RemoveContainer" containerID="d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.305311 4688 scope.go:117] "RemoveContainer" containerID="5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.305922 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e\": container with ID starting with 5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e not found: ID does not exist" containerID="5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.305967 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e"} err="failed to get container status \"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e\": rpc error: code = NotFound desc = could not find container \"5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e\": container with ID starting with 5a75518fd98ffc893ef68f3d7d053ac2145d176bbbf02a6dd78c24e1f1d1277e not found: ID does not exist" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.305993 4688 scope.go:117] "RemoveContainer" containerID="ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.306439 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c\": container with ID starting with ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c not found: ID does not exist" containerID="ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.306469 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c"} err="failed to get container status \"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c\": rpc error: code = NotFound desc = could not find container \"ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c\": container with ID starting with ef625d1d257ba167f92b6bb5e50cf45439a361fc8fd6203a412b5c2d7d79637c not found: ID does not exist" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.306484 4688 scope.go:117] "RemoveContainer" containerID="248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.307133 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522\": container with ID starting with 248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522 not found: ID does not exist" containerID="248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.307160 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522"} err="failed to get container status \"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522\": rpc error: code = NotFound desc = could not find container \"248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522\": container with ID starting with 248e432b838059500df845c6a7ca9d56344d2d2b412bf253ddb96abd51658522 not found: ID does not exist" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.307173 4688 scope.go:117] "RemoveContainer" containerID="d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5" Jan 23 18:29:38 crc kubenswrapper[4688]: E0123 18:29:38.307411 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5\": container with ID starting with d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5 not found: ID does not exist" containerID="d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.307431 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5"} err="failed to get container status \"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5\": rpc error: code = NotFound desc = could not find container \"d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5\": container with ID starting with d32bf916138c73c32320d2ab622565edfc13f5ff34421224c3015a9a219e92f5 not found: ID does not exist" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.378712 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.378814 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96nhg\" (UniqueName: \"kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.378912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.378970 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data\") pod \"ceilometer-0\" (UID: 
\"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.378992 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.379052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.379089 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.398157 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": read tcp 10.217.0.2:44482->10.217.0.169:9292: read: connection reset by peer" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.398265 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9292/healthcheck\": read tcp 10.217.0.2:44470->10.217.0.169:9292: read: connection reset by peer" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484223 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484436 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484501 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484619 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96nhg\" (UniqueName: \"kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.484936 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.486072 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.487632 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.491044 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.491231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.492695 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.493113 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.512628 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96nhg\" (UniqueName: \"kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg\") pod \"ceilometer-0\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.600664 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.896814 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996147 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j6bw\" (UniqueName: \"kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996291 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996447 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996473 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996518 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996541 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:38 crc kubenswrapper[4688]: I0123 18:29:38.996628 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data\") pod \"8a335d28-2e6a-428b-8eb6-9a91c8150833\" (UID: \"8a335d28-2e6a-428b-8eb6-9a91c8150833\") " Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.000275 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.000153 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs" (OuterVolumeSpecName: "logs") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.006702 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.006995 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts" (OuterVolumeSpecName: "scripts") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.008988 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw" (OuterVolumeSpecName: "kube-api-access-4j6bw") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "kube-api-access-4j6bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.045560 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.078715 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169197 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169260 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169275 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169308 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8a335d28-2e6a-428b-8eb6-9a91c8150833-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169323 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169353 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j6bw\" (UniqueName: \"kubernetes.io/projected/8a335d28-2e6a-428b-8eb6-9a91c8150833-kube-api-access-4j6bw\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.169424 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.177253 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data" (OuterVolumeSpecName: "config-data") pod "8a335d28-2e6a-428b-8eb6-9a91c8150833" (UID: "8a335d28-2e6a-428b-8eb6-9a91c8150833"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.215352 4688 generic.go:334] "Generic (PLEG): container finished" podID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerID="1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188" exitCode=0 Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.216684 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.219506 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerDied","Data":"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188"} Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.219583 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8a335d28-2e6a-428b-8eb6-9a91c8150833","Type":"ContainerDied","Data":"685d3abefe9b67fe30a8d3144b4fa757904bb44be394192eeed536a28e33d894"} Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.219608 4688 scope.go:117] "RemoveContainer" containerID="1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.224536 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.239886 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.277934 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a335d28-2e6a-428b-8eb6-9a91c8150833-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.278583 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.298541 4688 scope.go:117] "RemoveContainer" containerID="c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.310289 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.328259 4688 scope.go:117] "RemoveContainer" containerID="1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188" Jan 23 18:29:39 crc kubenswrapper[4688]: E0123 18:29:39.328880 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188\": container with ID starting with 1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188 not found: ID does not exist" containerID="1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.328932 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188"} err="failed to get container status \"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188\": rpc error: code = NotFound desc = could not find container \"1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188\": container with ID starting with 1c8406af6e7ba3835297317a4a557eb88fa909ea97f9e0accbcfe9db2f1bb188 not found: ID does not exist" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.328962 4688 scope.go:117] "RemoveContainer" containerID="c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e" Jan 23 18:29:39 crc 
kubenswrapper[4688]: E0123 18:29:39.330700 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e\": container with ID starting with c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e not found: ID does not exist" containerID="c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.330737 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e"} err="failed to get container status \"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e\": rpc error: code = NotFound desc = could not find container \"c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e\": container with ID starting with c3c7daf739ed5eadcb0d5ec6d021ddfaac70e5b00e1b838cb454446c36d4955e not found: ID does not exist" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.344803 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.382139 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" path="/var/lib/kubelet/pods/8a335d28-2e6a-428b-8eb6-9a91c8150833/volumes" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.383767 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62147ba-ad27-4edd-9c92-978850dcad49" path="/var/lib/kubelet/pods/e62147ba-ad27-4edd-9c92-978850dcad49/volumes" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.384795 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:29:39 crc kubenswrapper[4688]: E0123 18:29:39.385357 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-log" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.385378 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-log" Jan 23 18:29:39 crc kubenswrapper[4688]: E0123 18:29:39.385402 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-httpd" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.385410 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-httpd" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.385728 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-log" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.385772 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a335d28-2e6a-428b-8eb6-9a91c8150833" containerName="glance-httpd" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.387918 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.390828 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.391050 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.400510 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.584779 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.584846 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585000 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585042 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585074 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585098 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-logs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585118 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.585135 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9s4fc\" (UniqueName: \"kubernetes.io/projected/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-kube-api-access-9s4fc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687026 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-logs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687079 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s4fc\" (UniqueName: \"kubernetes.io/projected/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-kube-api-access-9s4fc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687182 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687219 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687314 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687377 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687664 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-logs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.687916 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.688390 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.693503 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.703776 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.711342 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.713514 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.716013 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s4fc\" (UniqueName: \"kubernetes.io/projected/d00dfb95-d6b9-42c5-bd68-91cba08b97b4-kube-api-access-9s4fc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:39 crc kubenswrapper[4688]: I0123 18:29:39.750860 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d00dfb95-d6b9-42c5-bd68-91cba08b97b4\") " pod="openstack/glance-default-external-api-0" Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.017382 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.274173 4688 generic.go:334] "Generic (PLEG): container finished" podID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerID="34b71fd80089ea0d8c7559b2e1f370c029654e884dd53ee14302b5de033a4ba8" exitCode=0
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.274631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerDied","Data":"34b71fd80089ea0d8c7559b2e1f370c029654e884dd53ee14302b5de033a4ba8"}
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.281496 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerStarted","Data":"afde590cf43eefe3989f33ad811e2893d8c5c7c5abdc5eabfbb67da33a76c11f"}
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.411119 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519105 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519364 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519414 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519440 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q79z\" (UniqueName: \"kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519539 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519562 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.519684 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\" (UID: \"6944ad2d-9b21-468f-aaaf-66adbbe5dc23\") "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.520018 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs" (OuterVolumeSpecName: "logs") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.520480 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.520902 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-logs\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.520923 4688 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.527156 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z" (OuterVolumeSpecName: "kube-api-access-8q79z") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "kube-api-access-8q79z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.527374 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts" (OuterVolumeSpecName: "scripts") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.532463 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.567304 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.592162 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data" (OuterVolumeSpecName: "config-data") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.606725 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6944ad2d-9b21-468f-aaaf-66adbbe5dc23" (UID: "6944ad2d-9b21-468f-aaaf-66adbbe5dc23"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631196 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631254 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631265 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631274 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631283 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q79z\" (UniqueName: \"kubernetes.io/projected/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-kube-api-access-8q79z\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.631292 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6944ad2d-9b21-468f-aaaf-66adbbe5dc23-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.651957 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.735251 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:40 crc kubenswrapper[4688]: I0123 18:29:40.749064 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.292791 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d00dfb95-d6b9-42c5-bd68-91cba08b97b4","Type":"ContainerStarted","Data":"bfa6740acf280f1e526d9890cb663ebc970d2219ecaad0cb1efd288197b272d2"}
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.294992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6944ad2d-9b21-468f-aaaf-66adbbe5dc23","Type":"ContainerDied","Data":"d10db3724dee6951326de587d7b9d25f55c97a9d501a4665c79d5413a962b372"}
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.295056 4688 scope.go:117] "RemoveContainer" containerID="34b71fd80089ea0d8c7559b2e1f370c029654e884dd53ee14302b5de033a4ba8"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.295216 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.297625 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerStarted","Data":"c8ff82b33f5bd37b332f1c3459cb069d7b4168296ca70c1df2ae60a368154838"}
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.343837 4688 scope.go:117] "RemoveContainer" containerID="cdc0e255c1dddc4d207fda1f9985a2821c8145c1d47a84ebdda4f60f19e032a2"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.396491 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.432323 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.445901 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:29:41 crc kubenswrapper[4688]: E0123 18:29:41.446636 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-log"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.446662 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-log"
Jan 23 18:29:41 crc kubenswrapper[4688]: E0123 18:29:41.446704 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-httpd"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.446713 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-httpd"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.446974 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-httpd"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.447005 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" containerName="glance-log"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.448496 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
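[Editor's annotation] Every kubenswrapper record above shares the same klog header after the journald prefix: a severity letter plus MMDD (I0123, E0123, W0123), a wall-clock time, the kubelet PID (4688), the emitting source file and line, then the message. A minimal parsing sketch, handy for tracing one pod's teardown like the glance-default-internal-api-0 sequence above; the regex and field names are this annotation's own choice, not anything kubelet ships:

```python
import re

# klog header: severity+MMDD, HH:MM:SS.micros, PID, file.go:line] message
KLOG = re.compile(
    r'(?P<sev>[IWE])(?P<date>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) '
    r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)

def parse(line: str):
    """Return the klog fields of a journal line, or None if it has none."""
    m = KLOG.search(line)
    return m.groupdict() if m else None

rec = parse('Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.295056 '
            '4688 scope.go:117] "RemoveContainer" containerID="34b71fd8..."')
print(rec["src"], rec["msg"][:17])  # scope.go:117 "RemoveContainer"
```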
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.451042 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.451611 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.476350 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555493 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555623 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555659 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555688 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555752 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555801 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrn88\" (UniqueName: \"kubernetes.io/projected/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-kube-api-access-hrn88\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.555874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658220 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658261 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658336 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658391 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658415 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrn88\" (UniqueName: \"kubernetes.io/projected/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-kube-api-access-hrn88\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658444 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.658493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.659483 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-logs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.659650 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.659838 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.664907 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.664996 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.665282 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.686853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrn88\" (UniqueName: \"kubernetes.io/projected/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-kube-api-access-hrn88\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.691214 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa9f2c9d-a6e3-43fb-9601-ce24f5e89417-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.731281 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417\") " pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.770488 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:41 crc kubenswrapper[4688]: I0123 18:29:41.996666 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c854fbb9b-lr4lr"
Jan 23 18:29:42 crc kubenswrapper[4688]: I0123 18:29:42.002797 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-689f6b4f86-pbwfh"
Jan 23 18:29:42 crc kubenswrapper[4688]: I0123 18:29:42.324634 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerStarted","Data":"6fc764cf04af30997a744e38bf48611aa9e87a7dfea300ea24116faceb5d68f4"}
Jan 23 18:29:42 crc kubenswrapper[4688]: I0123 18:29:42.338518 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d00dfb95-d6b9-42c5-bd68-91cba08b97b4","Type":"ContainerStarted","Data":"68cf16958e79d8e606aed47d029f9f823ee35e5689754d2a3cecfde8f4236a00"}
Jan 23 18:29:42 crc kubenswrapper[4688]: I0123 18:29:42.444653 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.354822 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerStarted","Data":"eef0ce3d3be20f70f3950b4e285000da3bf251b24bb3c7dd9d4130ce65886c21"}
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.391219 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6944ad2d-9b21-468f-aaaf-66adbbe5dc23" path="/var/lib/kubelet/pods/6944ad2d-9b21-468f-aaaf-66adbbe5dc23/volumes"
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.392133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d00dfb95-d6b9-42c5-bd68-91cba08b97b4","Type":"ContainerStarted","Data":"49fbd7ffcc955db4a0061ac873bbf4dc45ce7f5c51ca858e9c85e237ae3626ff"}
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.392160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417","Type":"ContainerStarted","Data":"994af6cd04b9e8991642fea491d43ed464cc13c271553ee6a7cb48a5937d641f"}
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.392172 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417","Type":"ContainerStarted","Data":"250f9fc0f9b0c82e5a0dba389864de2ca0cf4aa55ce0b51acf6a331eca0e5052"}
Jan 23 18:29:43 crc kubenswrapper[4688]: I0123 18:29:43.431976 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.431942729 podStartE2EDuration="4.431942729s" podCreationTimestamp="2026-01-23 18:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:43.402333732 +0000 UTC m=+1378.398158193" watchObservedRunningTime="2026-01-23 18:29:43.431942729 +0000 UTC m=+1378.427767180"
Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.313738 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-689f6b4f86-pbwfh"
Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.400798 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"]
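[Editor's annotation] In the "Observed pod startup duration" record above, firstStartedPulling/lastFinishedPulling are pinned to the zero time, meaning no image pull was counted; podStartSLOduration is then just watchObservedRunningTime minus podCreationTimestamp. A quick check of the 4.431942729s figure (truncated to the microseconds Python's datetime carries):

```python
from datetime import datetime, timezone

created  = datetime(2026, 1, 23, 18, 29, 39, 0, timezone.utc)       # podCreationTimestamp
observed = datetime(2026, 1, 23, 18, 29, 43, 431942, timezone.utc)  # watchObservedRunningTime, ns dropped
print((observed - created).total_seconds())  # 4.431942 ~= podStartSLOduration=4.431942729
```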
source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.401057 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon-log" containerID="cri-o://f83895854bacf2798dc3dc8ac4b2a50c9ea0930b9527f30a323ef71f1d6f96e2" gracePeriod=30 Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.401627 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" containerID="cri-o://334fe52ece4f91dd7ce55d73d8d16cb635250937aea94cfccd2aa29041b1f9e8" gracePeriod=30 Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.409359 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.411257 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.412310 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerStarted","Data":"fc2f561acdd594915315147ec57ca207e3eb6e6985c8e305f3b32bb741593feb"} Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.423742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aa9f2c9d-a6e3-43fb-9601-ce24f5e89417","Type":"ContainerStarted","Data":"6c6a87ec420894578db129060f535c472024bcb75b4df7a6f833c564c01bd9ad"} Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.462149 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6614165619999999 podStartE2EDuration="6.462125334s" podCreationTimestamp="2026-01-23 18:29:38 +0000 UTC" firstStartedPulling="2026-01-23 18:29:39.236171621 +0000 UTC m=+1374.231996062" lastFinishedPulling="2026-01-23 18:29:44.036880393 +0000 UTC m=+1379.032704834" observedRunningTime="2026-01-23 18:29:44.441639087 +0000 UTC m=+1379.437463518" watchObservedRunningTime="2026-01-23 18:29:44.462125334 +0000 UTC m=+1379.457949775" Jan 23 18:29:44 crc kubenswrapper[4688]: I0123 18:29:44.495808 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.495783307 podStartE2EDuration="3.495783307s" podCreationTimestamp="2026-01-23 18:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:29:44.483071603 +0000 UTC m=+1379.478896064" watchObservedRunningTime="2026-01-23 18:29:44.495783307 +0000 UTC m=+1379.491607748" Jan 23 18:29:47 crc kubenswrapper[4688]: I0123 18:29:47.558129 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:38488->10.217.0.155:8443: read: connection reset by peer" Jan 23 18:29:48 crc kubenswrapper[4688]: I0123 18:29:48.258919 4688 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:29:48 crc kubenswrapper[4688]: I0123 18:29:48.474866 4688 generic.go:334] "Generic (PLEG): container finished" podID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerID="334fe52ece4f91dd7ce55d73d8d16cb635250937aea94cfccd2aa29041b1f9e8" exitCode=0 Jan 23 18:29:48 crc kubenswrapper[4688]: I0123 18:29:48.474929 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerDied","Data":"334fe52ece4f91dd7ce55d73d8d16cb635250937aea94cfccd2aa29041b1f9e8"} Jan 23 18:29:48 crc kubenswrapper[4688]: I0123 18:29:48.474982 4688 scope.go:117] "RemoveContainer" containerID="edc9f72973727b10898539eabd6253423ace5c0db70c399aa7d84e12ce7541f6" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.018140 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.019285 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.056706 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.060821 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.511393 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:29:50 crc kubenswrapper[4688]: I0123 18:29:50.511670 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 18:29:51 crc kubenswrapper[4688]: I0123 18:29:51.771159 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:51 crc kubenswrapper[4688]: I0123 18:29:51.773780 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:51 crc kubenswrapper[4688]: I0123 18:29:51.803785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:51 crc kubenswrapper[4688]: I0123 18:29:51.818071 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.530971 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.531004 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.532307 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.532338 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 18:29:52 crc 
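[Editor's annotation] The three horizon "Probe failed" records above show the readiness probe dying in stages as the container shuts down under its 30s grace period: first an EOF mid-request, then a TCP reset, then connection refused once nothing listens on 10.217.0.155:8443. A rough manual equivalent of that HTTPS GET, with certificate verification disabled since kubelet HTTPS probes do not verify the serving certificate (endpoint and path taken from the log; the one-second timeout is this sketch's choice):

```python
import ssl, urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False      # kubelet-style probe: no cert verification
ctx.verify_mode = ssl.CERT_NONE

url = "https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/"
try:
    with urllib.request.urlopen(url, timeout=1, context=ctx) as resp:
        print("probe ok:", resp.status)
except OSError as exc:
    print("probe failed:", exc)  # EOF / connection reset / refused, as logged above
```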
Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.730570 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 18:29:52 crc kubenswrapper[4688]: I0123 18:29:52.737439 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 18:29:54 crc kubenswrapper[4688]: I0123 18:29:54.576997 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:29:54 crc kubenswrapper[4688]: I0123 18:29:54.577039 4688 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:29:54 crc kubenswrapper[4688]: I0123 18:29:54.650422 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:54 crc kubenswrapper[4688]: I0123 18:29:54.659645 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 18:29:55 crc kubenswrapper[4688]: I0123 18:29:55.591916 4688 generic.go:334] "Generic (PLEG): container finished" podID="1483d3ee-c9ce-41d9-939c-caa781261c00" containerID="87d68444f9ab664301455c7166f3f21f6146a91e7cf6b7a910a5c041f056d061" exitCode=0
Jan 23 18:29:55 crc kubenswrapper[4688]: I0123 18:29:55.592029 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gf29" event={"ID":"1483d3ee-c9ce-41d9-939c-caa781261c00","Type":"ContainerDied","Data":"87d68444f9ab664301455c7166f3f21f6146a91e7cf6b7a910a5c041f056d061"}
Jan 23 18:29:56 crc kubenswrapper[4688]: I0123 18:29:56.945131 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gf29"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.064641 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts\") pod \"1483d3ee-c9ce-41d9-939c-caa781261c00\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") "
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.064875 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle\") pod \"1483d3ee-c9ce-41d9-939c-caa781261c00\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") "
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.064919 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data\") pod \"1483d3ee-c9ce-41d9-939c-caa781261c00\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") "
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.064948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9zdf\" (UniqueName: \"kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf\") pod \"1483d3ee-c9ce-41d9-939c-caa781261c00\" (UID: \"1483d3ee-c9ce-41d9-939c-caa781261c00\") "
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.073382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf" (OuterVolumeSpecName: "kube-api-access-v9zdf") pod "1483d3ee-c9ce-41d9-939c-caa781261c00" (UID: "1483d3ee-c9ce-41d9-939c-caa781261c00"). InnerVolumeSpecName "kube-api-access-v9zdf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.080406 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts" (OuterVolumeSpecName: "scripts") pod "1483d3ee-c9ce-41d9-939c-caa781261c00" (UID: "1483d3ee-c9ce-41d9-939c-caa781261c00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.094511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1483d3ee-c9ce-41d9-939c-caa781261c00" (UID: "1483d3ee-c9ce-41d9-939c-caa781261c00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.100109 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data" (OuterVolumeSpecName: "config-data") pod "1483d3ee-c9ce-41d9-939c-caa781261c00" (UID: "1483d3ee-c9ce-41d9-939c-caa781261c00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.168031 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.168075 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9zdf\" (UniqueName: \"kubernetes.io/projected/1483d3ee-c9ce-41d9-939c-caa781261c00-kube-api-access-v9zdf\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.168088 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.168096 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1483d3ee-c9ce-41d9-939c-caa781261c00-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.620929 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-8gf29" event={"ID":"1483d3ee-c9ce-41d9-939c-caa781261c00","Type":"ContainerDied","Data":"3a48e4cf6d34d9863393542e107374082169cd4307173580fd60a27237a3fdfb"}
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.621259 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a48e4cf6d34d9863393542e107374082169cd4307173580fd60a27237a3fdfb"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.621174 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-8gf29"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.727955 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 23 18:29:57 crc kubenswrapper[4688]: E0123 18:29:57.728561 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1483d3ee-c9ce-41d9-939c-caa781261c00" containerName="nova-cell0-conductor-db-sync"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.728585 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1483d3ee-c9ce-41d9-939c-caa781261c00" containerName="nova-cell0-conductor-db-sync"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.728878 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1483d3ee-c9ce-41d9-939c-caa781261c00" containerName="nova-cell0-conductor-db-sync"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.729817 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.732173 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ff5fc"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.738746 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.745860 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.781326 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.781393 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.781593 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bzb\" (UniqueName: \"kubernetes.io/projected/c7588894-f33b-452c-abfc-7576e58fbe4b-kube-api-access-j7bzb\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.883637 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bzb\" (UniqueName: \"kubernetes.io/projected/c7588894-f33b-452c-abfc-7576e58fbe4b-kube-api-access-j7bzb\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.883786 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.883829 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.889353 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.890581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7588894-f33b-452c-abfc-7576e58fbe4b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:57 crc kubenswrapper[4688]: I0123 18:29:57.902449 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bzb\" (UniqueName: \"kubernetes.io/projected/c7588894-f33b-452c-abfc-7576e58fbe4b-kube-api-access-j7bzb\") pod \"nova-cell0-conductor-0\" (UID: \"c7588894-f33b-452c-abfc-7576e58fbe4b\") " pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:58 crc kubenswrapper[4688]: I0123 18:29:58.049883 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 23 18:29:58 crc kubenswrapper[4688]: I0123 18:29:58.259841 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused"
Jan 23 18:29:58 crc kubenswrapper[4688]: I0123 18:29:58.537979 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 23 18:29:58 crc kubenswrapper[4688]: W0123 18:29:58.538439 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7588894_f33b_452c_abfc_7576e58fbe4b.slice/crio-09deb1769025aa59951a537310732e40943125a5c0ec6404ee2c557ce9b824ec WatchSource:0}: Error finding container 09deb1769025aa59951a537310732e40943125a5c0ec6404ee2c557ce9b824ec: Status 404 returned error can't find the container with id 09deb1769025aa59951a537310732e40943125a5c0ec6404ee2c557ce9b824ec
Jan 23 18:29:58 crc kubenswrapper[4688]: I0123 18:29:58.632708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c7588894-f33b-452c-abfc-7576e58fbe4b","Type":"ContainerStarted","Data":"09deb1769025aa59951a537310732e40943125a5c0ec6404ee2c557ce9b824ec"}
Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.149269 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"]
Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.151200 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"
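[Editor's annotation] The W0123 watch-event warning above looks like a benign startup race rather than a failure: the cgroup watcher sees the new slice before the runtime has registered the container, and the very next PLEG event reports the same container ID (09deb176...) as started. The slice path itself appears mechanical; with the systemd cgroup driver and a BestEffort pod, the slice name embeds the pod UID with dashes mapped to underscores. A sketch of that convention as observed here, not any published API:

```python
def besteffort_crio_cgroup(pod_uid: str, container_id: str) -> str:
    """Reconstruct the cgroup path seen in the watch-event warning above
    (systemd cgroup driver, BestEffort QoS class, CRI-O runtime)."""
    slice_uid = pod_uid.replace("-", "_")  # UID dashes become underscores in the slice name
    return ("/kubepods.slice/kubepods-besteffort.slice/"
            f"kubepods-besteffort-pod{slice_uid}.slice/crio-{container_id}")

print(besteffort_crio_cgroup(
    "c7588894-f33b-452c-abfc-7576e58fbe4b",
    "09deb1769025aa59951a537310732e40943125a5c0ec6404ee2c557ce9b824ec"))
```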
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.154469 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.155347 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.178955 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"] Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.240068 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.240546 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.240782 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrwm\" (UniqueName: \"kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.343214 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.343445 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.343525 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqrwm\" (UniqueName: \"kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.344250 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume\") pod 
\"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.365231 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.366492 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqrwm\" (UniqueName: \"kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm\") pod \"collect-profiles-29486550-crlt9\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.478740 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.666007 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c7588894-f33b-452c-abfc-7576e58fbe4b","Type":"ContainerStarted","Data":"5c280be51825f63712c6b6144bb4c265915d749778faddb8924963df446aa1c9"} Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.666946 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.699753 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.699724681 podStartE2EDuration="3.699724681s" podCreationTimestamp="2026-01-23 18:29:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:00.689349174 +0000 UTC m=+1395.685173625" watchObservedRunningTime="2026-01-23 18:30:00.699724681 +0000 UTC m=+1395.695549122" Jan 23 18:30:00 crc kubenswrapper[4688]: I0123 18:30:00.942560 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"] Jan 23 18:30:01 crc kubenswrapper[4688]: I0123 18:30:01.685702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" event={"ID":"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0","Type":"ContainerStarted","Data":"135497b468f7a754e8a4fd47bf8448c2769a1e61ba86b493976da194e1d99baa"} Jan 23 18:30:01 crc kubenswrapper[4688]: I0123 18:30:01.686066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" event={"ID":"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0","Type":"ContainerStarted","Data":"c05ec503fa3e42e3c491cb543f9af74987f7c3653f115b280da3ad31aa0217cc"} Jan 23 18:30:01 crc kubenswrapper[4688]: I0123 18:30:01.707675 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" podStartSLOduration=1.707636578 podStartE2EDuration="1.707636578s" podCreationTimestamp="2026-01-23 18:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:01.704622471 +0000 UTC m=+1396.700446912" watchObservedRunningTime="2026-01-23 18:30:01.707636578 +0000 UTC m=+1396.703461019" Jan 23 18:30:02 crc kubenswrapper[4688]: I0123 18:30:02.694485 4688 generic.go:334] "Generic (PLEG): container finished" podID="5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" containerID="135497b468f7a754e8a4fd47bf8448c2769a1e61ba86b493976da194e1d99baa" exitCode=0 Jan 23 18:30:02 crc kubenswrapper[4688]: I0123 18:30:02.694590 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" event={"ID":"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0","Type":"ContainerDied","Data":"135497b468f7a754e8a4fd47bf8448c2769a1e61ba86b493976da194e1d99baa"} Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.066035 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.165398 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume\") pod \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.166017 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqrwm\" (UniqueName: \"kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm\") pod \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.166077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume\") pod \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\" (UID: \"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0\") " Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.166090 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" (UID: "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.166640 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.172838 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" (UID: "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.173123 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm" (OuterVolumeSpecName: "kube-api-access-mqrwm") pod "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" (UID: "5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0"). InnerVolumeSpecName "kube-api-access-mqrwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.268568 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqrwm\" (UniqueName: \"kubernetes.io/projected/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-kube-api-access-mqrwm\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.268625 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.717518 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" event={"ID":"5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0","Type":"ContainerDied","Data":"c05ec503fa3e42e3c491cb543f9af74987f7c3653f115b280da3ad31aa0217cc"} Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.717575 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05ec503fa3e42e3c491cb543f9af74987f7c3653f115b280da3ad31aa0217cc" Jan 23 18:30:04 crc kubenswrapper[4688]: I0123 18:30:04.717978 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.080157 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.260345 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-c854fbb9b-lr4lr" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.545510 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9287c"] Jan 23 18:30:08 crc kubenswrapper[4688]: E0123 18:30:08.546270 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" containerName="collect-profiles" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.546349 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" containerName="collect-profiles" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.546728 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" containerName="collect-profiles" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.547837 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.552070 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.552080 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.560907 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9287c"] Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.614333 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.695578 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.695981 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.696640 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjbq4\" (UniqueName: \"kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.696846 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.798798 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.798906 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.799009 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc 
kubenswrapper[4688]: I0123 18:30:08.799044 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjbq4\" (UniqueName: \"kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.807191 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.807916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.822944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjbq4\" (UniqueName: \"kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.825075 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.826844 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.829713 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts\") pod \"nova-cell0-cell-mapping-9287c\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.842032 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.848688 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.874271 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.969272 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.970922 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.974591 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 18:30:08 crc kubenswrapper[4688]: I0123 18:30:08.999613 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.001660 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.005115 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.009596 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.009652 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.009696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.009808 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz55b\" (UniqueName: \"kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.030458 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.054700 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.056906 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.069951 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114653 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9622f\" (UniqueName: \"kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114776 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114814 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114867 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114909 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.114971 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.115005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwr8l\" (UniqueName: \"kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.115124 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.115218 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.115250 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz55b\" (UniqueName: \"kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.118253 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.127478 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.128491 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.143482 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.155464 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz55b\" (UniqueName: \"kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b\") pod \"nova-api-0\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.164756 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.224542 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.224859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwr8l\" (UniqueName: \"kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.224976 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltprd\" (UniqueName: \"kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225097 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225386 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225583 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225733 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9622f\" (UniqueName: \"kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225846 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.225930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.235831 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.244657 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.244876 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.251007 
4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.252813 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwr8l\" (UniqueName: \"kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l\") pod \"nova-cell1-novncproxy-0\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.256603 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9622f\" (UniqueName: \"kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f\") pod \"nova-scheduler-0\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.267609 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.270043 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.308523 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.327711 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.327923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.327984 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltprd\" (UniqueName: \"kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.328013 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.330827 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.335884 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.335957 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.348699 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltprd\" (UniqueName: \"kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd\") pod \"nova-metadata-0\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.382816 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.416543 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.431862 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.432022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.432091 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.432225 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6nn\" (UniqueName: \"kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.432308 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.432353 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.489048 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.523586 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.534897 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.534975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.535030 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6nn\" (UniqueName: \"kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.535068 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.535091 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.535207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.536164 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.536219 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: 
\"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.536866 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.537634 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.538983 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.558343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6nn\" (UniqueName: \"kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn\") pod \"dnsmasq-dns-757b4f8459-jnwhl\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.610844 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.661966 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9287c"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.784966 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9287c" event={"ID":"df94e7f5-9c11-410b-9513-d4e3350e1d29","Type":"ContainerStarted","Data":"c96009ffdc994e4dfd2d2ed8d306807c378e38ca042659b5cc8bd58aa2918711"} Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.969641 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.986272 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tnsn2"] Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.988010 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.991307 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 18:30:09 crc kubenswrapper[4688]: I0123 18:30:09.991873 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.068123 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tnsn2"] Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.140082 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.176351 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tqs\" (UniqueName: \"kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.176704 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.176822 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.176846 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.194734 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.284728 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tqs\" (UniqueName: \"kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.284850 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.284999 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.285021 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.296849 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.317049 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.318822 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.350811 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tqs\" (UniqueName: \"kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs\") pod \"nova-cell1-conductor-db-sync-tnsn2\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.373441 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.415120 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.548146 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.852326 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9287c" event={"ID":"df94e7f5-9c11-410b-9513-d4e3350e1d29","Type":"ContainerStarted","Data":"35b77bb28805e9d7ad9b70aa1149b6d40234a7736a5cf7a58b3f6f80d6e940c7"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.864096 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerStarted","Data":"b077f3e69af1fbe808d9fd738614d02061515eb2a53513e42e19137efa36400d"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.866908 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f","Type":"ContainerStarted","Data":"61ea0194459be68adac71cbee83b912e9e6503de96d25dcc8cf1b8731b3d5daf"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.868104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerStarted","Data":"5ceb937617df946015b24aeab12fe85f6a08861636fd06a7fed98e5af10d2aa2"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.882907 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" event={"ID":"e51086ce-d00f-4b91-82e5-fd207f2908b2","Type":"ContainerStarted","Data":"3a581487d4659a88eeedf1914114e9c84fc35229fbf11b2e11c7151f063226ae"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.895069 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"71b056ea-ae53-487f-a251-e4bba40fa78d","Type":"ContainerStarted","Data":"bb6d00c16fee720cf28a6c0b85590d36f2380c718d46a435306459f8c337ac08"} Jan 23 18:30:10 crc kubenswrapper[4688]: I0123 18:30:10.905371 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9287c" podStartSLOduration=2.90531694 podStartE2EDuration="2.90531694s" podCreationTimestamp="2026-01-23 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:10.882722213 +0000 UTC m=+1405.878546664" watchObservedRunningTime="2026-01-23 18:30:10.90531694 +0000 UTC m=+1405.901141411" Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.171882 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tnsn2"] Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.918825 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" event={"ID":"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab","Type":"ContainerStarted","Data":"de483ea2cf0508da8a24bfa7431659d9cdf99e46759873822d340d4bef3be1b8"} Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.919136 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" event={"ID":"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab","Type":"ContainerStarted","Data":"8de496070dc8f31386bc39145517452a2f432e0d9c4fa3e17649d1fbab2e63bc"} Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.923372 4688 generic.go:334] 
"Generic (PLEG): container finished" podID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerID="e859d0602875c9b964880e05540c15c7c13112b533fe9ced71cb67a216bd2234" exitCode=0 Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.923549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" event={"ID":"e51086ce-d00f-4b91-82e5-fd207f2908b2","Type":"ContainerDied","Data":"e859d0602875c9b964880e05540c15c7c13112b533fe9ced71cb67a216bd2234"} Jan 23 18:30:11 crc kubenswrapper[4688]: I0123 18:30:11.944732 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" podStartSLOduration=2.944706358 podStartE2EDuration="2.944706358s" podCreationTimestamp="2026-01-23 18:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:11.94127146 +0000 UTC m=+1406.937095901" watchObservedRunningTime="2026-01-23 18:30:11.944706358 +0000 UTC m=+1406.940530799" Jan 23 18:30:12 crc kubenswrapper[4688]: I0123 18:30:12.603484 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:12 crc kubenswrapper[4688]: I0123 18:30:12.678168 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:12 crc kubenswrapper[4688]: I0123 18:30:12.973890 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" event={"ID":"e51086ce-d00f-4b91-82e5-fd207f2908b2","Type":"ContainerStarted","Data":"45fad2ae8c5a88a7d0f0ceb2499f666e9eb96a0ecfcd0bcee87664b75350d2d8"} Jan 23 18:30:13 crc kubenswrapper[4688]: I0123 18:30:13.005350 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" podStartSLOduration=4.005326923 podStartE2EDuration="4.005326923s" podCreationTimestamp="2026-01-23 18:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:13.002949025 +0000 UTC m=+1407.998773476" watchObservedRunningTime="2026-01-23 18:30:13.005326923 +0000 UTC m=+1408.001151354" Jan 23 18:30:13 crc kubenswrapper[4688]: I0123 18:30:13.983785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.007138 4688 generic.go:334] "Generic (PLEG): container finished" podID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerID="f83895854bacf2798dc3dc8ac4b2a50c9ea0930b9527f30a323ef71f1d6f96e2" exitCode=137 Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.007201 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerDied","Data":"f83895854bacf2798dc3dc8ac4b2a50c9ea0930b9527f30a323ef71f1d6f96e2"} Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.231375 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.231729 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" containerName="kube-state-metrics" containerID="cri-o://051ed7968e6fd61b3718018de4019cf76ee819bb9d22aa2c7daa44a1adf025cc" gracePeriod=30 Jan 23 18:30:15 crc kubenswrapper[4688]: 
I0123 18:30:15.414102 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.467910 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.467976 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.468157 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.468228 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.468360 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6dm4\" (UniqueName: \"kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.468419 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.468469 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs\") pod \"d7828699-c881-4ed8-a26a-9837e4dbb301\" (UID: \"d7828699-c881-4ed8-a26a-9837e4dbb301\") " Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.481666 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs" (OuterVolumeSpecName: "logs") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.500934 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.510350 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.510398 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7828699-c881-4ed8-a26a-9837e4dbb301-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.519184 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4" (OuterVolumeSpecName: "kube-api-access-c6dm4") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "kube-api-access-c6dm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.614870 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6dm4\" (UniqueName: \"kubernetes.io/projected/d7828699-c881-4ed8-a26a-9837e4dbb301-kube-api-access-c6dm4\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.641407 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts" (OuterVolumeSpecName: "scripts") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.679794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data" (OuterVolumeSpecName: "config-data") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.709151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.718571 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.718641 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.718663 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7828699-c881-4ed8-a26a-9837e4dbb301-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.863074 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d7828699-c881-4ed8-a26a-9837e4dbb301" (UID: "d7828699-c881-4ed8-a26a-9837e4dbb301"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:15 crc kubenswrapper[4688]: I0123 18:30:15.922768 4688 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7828699-c881-4ed8-a26a-9837e4dbb301-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.030576 4688 generic.go:334] "Generic (PLEG): container finished" podID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" containerID="051ed7968e6fd61b3718018de4019cf76ee819bb9d22aa2c7daa44a1adf025cc" exitCode=2 Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.030670 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2592fa6b-08d5-4d04-bc61-aa69d8aeef52","Type":"ContainerDied","Data":"051ed7968e6fd61b3718018de4019cf76ee819bb9d22aa2c7daa44a1adf025cc"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.030702 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"2592fa6b-08d5-4d04-bc61-aa69d8aeef52","Type":"ContainerDied","Data":"2384d316add0c2ee8bddf5c0299a1116b2b5aaaa9e664fe635a7e3a9292166c2"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.030716 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2384d316add0c2ee8bddf5c0299a1116b2b5aaaa9e664fe635a7e3a9292166c2" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.043160 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.056737 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-log" containerID="cri-o://f99fecb15d33c7306b4f4588c096a41adb3d842f6a66fdfdffb1cc84078e03d3" gracePeriod=30 Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.057087 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerStarted","Data":"5f37432c05caf252e571b83626eba8a1a10930382bc31901ae88aa7bba942c02"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.057127 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerStarted","Data":"f99fecb15d33c7306b4f4588c096a41adb3d842f6a66fdfdffb1cc84078e03d3"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.057211 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-metadata" containerID="cri-o://5f37432c05caf252e571b83626eba8a1a10930382bc31901ae88aa7bba942c02" gracePeriod=30 Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.070617 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c854fbb9b-lr4lr" event={"ID":"d7828699-c881-4ed8-a26a-9837e4dbb301","Type":"ContainerDied","Data":"02f647b7d0d399af3a20071246d814ce19fddde727ee70622afd7e5a3eacf830"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.070860 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c854fbb9b-lr4lr" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.072525 4688 scope.go:117] "RemoveContainer" containerID="334fe52ece4f91dd7ce55d73d8d16cb635250937aea94cfccd2aa29041b1f9e8" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.080366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"71b056ea-ae53-487f-a251-e4bba40fa78d","Type":"ContainerStarted","Data":"81c071448a3c95bd450cef3762419a33f0274774666dd45cf03431eb2cd81eb6"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.080545 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="71b056ea-ae53-487f-a251-e4bba40fa78d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://81c071448a3c95bd450cef3762419a33f0274774666dd45cf03431eb2cd81eb6" gracePeriod=30 Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.128918 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7czjf\" (UniqueName: \"kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf\") pod \"2592fa6b-08d5-4d04-bc61-aa69d8aeef52\" (UID: \"2592fa6b-08d5-4d04-bc61-aa69d8aeef52\") " Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.137607 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf" (OuterVolumeSpecName: "kube-api-access-7czjf") pod "2592fa6b-08d5-4d04-bc61-aa69d8aeef52" (UID: "2592fa6b-08d5-4d04-bc61-aa69d8aeef52"). InnerVolumeSpecName "kube-api-access-7czjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.137650 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerStarted","Data":"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.211643 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f","Type":"ContainerStarted","Data":"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220"} Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.220544 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.743997553 podStartE2EDuration="8.220491942s" podCreationTimestamp="2026-01-23 18:30:08 +0000 UTC" firstStartedPulling="2026-01-23 18:30:10.410430206 +0000 UTC m=+1405.406254647" lastFinishedPulling="2026-01-23 18:30:14.886924585 +0000 UTC m=+1409.882749036" observedRunningTime="2026-01-23 18:30:16.125762101 +0000 UTC m=+1411.121586562" watchObservedRunningTime="2026-01-23 18:30:16.220491942 +0000 UTC m=+1411.216316393" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.227035 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.497626671 podStartE2EDuration="8.227006078s" podCreationTimestamp="2026-01-23 18:30:08 +0000 UTC" firstStartedPulling="2026-01-23 18:30:10.160344278 +0000 UTC m=+1405.156168709" lastFinishedPulling="2026-01-23 18:30:14.889723675 +0000 UTC m=+1409.885548116" observedRunningTime="2026-01-23 18:30:16.152716222 +0000 UTC m=+1411.148540663" watchObservedRunningTime="2026-01-23 18:30:16.227006078 +0000 UTC m=+1411.222830519" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.232313 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7czjf\" (UniqueName: \"kubernetes.io/projected/2592fa6b-08d5-4d04-bc61-aa69d8aeef52-kube-api-access-7czjf\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.254674 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.264386 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c854fbb9b-lr4lr"] Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.281765 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.580442152 podStartE2EDuration="8.281739185s" podCreationTimestamp="2026-01-23 18:30:08 +0000 UTC" firstStartedPulling="2026-01-23 18:30:10.186499907 +0000 UTC m=+1405.182324348" lastFinishedPulling="2026-01-23 18:30:14.88779694 +0000 UTC m=+1409.883621381" observedRunningTime="2026-01-23 18:30:16.254269479 +0000 UTC m=+1411.250093940" watchObservedRunningTime="2026-01-23 18:30:16.281739185 +0000 UTC m=+1411.277563626" Jan 23 18:30:16 crc kubenswrapper[4688]: I0123 18:30:16.558372 4688 scope.go:117] "RemoveContainer" containerID="f83895854bacf2798dc3dc8ac4b2a50c9ea0930b9527f30a323ef71f1d6f96e2" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.245846 4688 generic.go:334] "Generic (PLEG): container finished" podID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerID="f99fecb15d33c7306b4f4588c096a41adb3d842f6a66fdfdffb1cc84078e03d3" exitCode=143 Jan 23 
18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.245952 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerDied","Data":"f99fecb15d33c7306b4f4588c096a41adb3d842f6a66fdfdffb1cc84078e03d3"} Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.252719 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.252846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerStarted","Data":"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a"} Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.283789 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.208335213 podStartE2EDuration="9.283764764s" podCreationTimestamp="2026-01-23 18:30:08 +0000 UTC" firstStartedPulling="2026-01-23 18:30:09.995401828 +0000 UTC m=+1404.991226269" lastFinishedPulling="2026-01-23 18:30:15.070831379 +0000 UTC m=+1410.066655820" observedRunningTime="2026-01-23 18:30:17.270988668 +0000 UTC m=+1412.266813109" watchObservedRunningTime="2026-01-23 18:30:17.283764764 +0000 UTC m=+1412.279589205" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.323343 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.350478 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.384703 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" path="/var/lib/kubelet/pods/2592fa6b-08d5-4d04-bc61-aa69d8aeef52/volumes" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.385467 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" path="/var/lib/kubelet/pods/d7828699-c881-4ed8-a26a-9837e4dbb301/volumes" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.386261 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:17 crc kubenswrapper[4688]: E0123 18:30:17.386677 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon-log" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.386697 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon-log" Jan 23 18:30:17 crc kubenswrapper[4688]: E0123 18:30:17.386721 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" containerName="kube-state-metrics" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.386728 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" containerName="kube-state-metrics" Jan 23 18:30:17 crc kubenswrapper[4688]: E0123 18:30:17.386753 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.386762 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:17 crc 
kubenswrapper[4688]: I0123 18:30:17.387048 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.387072 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon-log" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.387098 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2592fa6b-08d5-4d04-bc61-aa69d8aeef52" containerName="kube-state-metrics" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.388573 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.391595 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.392236 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.392616 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.462032 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.462229 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.462324 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.462514 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs8nh\" (UniqueName: \"kubernetes.io/projected/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-api-access-zs8nh\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.564130 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.564572 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.564724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs8nh\" (UniqueName: \"kubernetes.io/projected/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-api-access-zs8nh\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.564849 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.569525 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.572173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.573003 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ee596ca-3388-41b9-9651-b0f92e4b838c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.587524 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs8nh\" (UniqueName: \"kubernetes.io/projected/9ee596ca-3388-41b9-9651-b0f92e4b838c-kube-api-access-zs8nh\") pod \"kube-state-metrics-0\" (UID: \"9ee596ca-3388-41b9-9651-b0f92e4b838c\") " pod="openstack/kube-state-metrics-0" Jan 23 18:30:17 crc kubenswrapper[4688]: I0123 18:30:17.711117 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.026239 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.026999 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-central-agent" containerID="cri-o://c8ff82b33f5bd37b332f1c3459cb069d7b4168296ca70c1df2ae60a368154838" gracePeriod=30 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.027141 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-notification-agent" containerID="cri-o://6fc764cf04af30997a744e38bf48611aa9e87a7dfea300ea24116faceb5d68f4" gracePeriod=30 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.027130 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="sg-core" containerID="cri-o://eef0ce3d3be20f70f3950b4e285000da3bf251b24bb3c7dd9d4130ce65886c21" gracePeriod=30 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.027427 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="proxy-httpd" containerID="cri-o://fc2f561acdd594915315147ec57ca207e3eb6e6985c8e305f3b32bb741593feb" gracePeriod=30 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.239448 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.279033 4688 generic.go:334] "Generic (PLEG): container finished" podID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerID="fc2f561acdd594915315147ec57ca207e3eb6e6985c8e305f3b32bb741593feb" exitCode=0 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.280088 4688 generic.go:334] "Generic (PLEG): container finished" podID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerID="eef0ce3d3be20f70f3950b4e285000da3bf251b24bb3c7dd9d4130ce65886c21" exitCode=2 Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.279300 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerDied","Data":"fc2f561acdd594915315147ec57ca207e3eb6e6985c8e305f3b32bb741593feb"} Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.280374 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerDied","Data":"eef0ce3d3be20f70f3950b4e285000da3bf251b24bb3c7dd9d4130ce65886c21"} Jan 23 18:30:18 crc kubenswrapper[4688]: I0123 18:30:18.283287 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ee596ca-3388-41b9-9651-b0f92e4b838c","Type":"ContainerStarted","Data":"e161c7cbd842db083d30bd759404b557cf87f85f3cd6528c0bef9e80c481ba6d"} Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.310314 4688 generic.go:334] "Generic (PLEG): container finished" podID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerID="6fc764cf04af30997a744e38bf48611aa9e87a7dfea300ea24116faceb5d68f4" exitCode=0 Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.310728 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerID="c8ff82b33f5bd37b332f1c3459cb069d7b4168296ca70c1df2ae60a368154838" exitCode=0 Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.310585 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerDied","Data":"6fc764cf04af30997a744e38bf48611aa9e87a7dfea300ea24116faceb5d68f4"} Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.312104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerDied","Data":"c8ff82b33f5bd37b332f1c3459cb069d7b4168296ca70c1df2ae60a368154838"} Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.385377 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.385719 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.417457 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.489388 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.489445 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.524162 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.524231 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.553546 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.612327 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.710005 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"] Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.710575 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="dnsmasq-dns" containerID="cri-o://2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c" gracePeriod=10 Jan 23 18:30:19 crc kubenswrapper[4688]: I0123 18:30:19.976395 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.036692 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.036966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037025 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037403 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037686 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037822 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037877 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96nhg\" (UniqueName: \"kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.037915 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data\") pod \"b458ac0e-0717-485d-8665-e46e63bdc1bd\" (UID: \"b458ac0e-0717-485d-8665-e46e63bdc1bd\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.038688 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.039889 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.050495 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts" (OuterVolumeSpecName: "scripts") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.082382 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.138918 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg" (OuterVolumeSpecName: "kube-api-access-96nhg") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "kube-api-access-96nhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.142761 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.143016 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.143026 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b458ac0e-0717-485d-8665-e46e63bdc1bd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.143035 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96nhg\" (UniqueName: \"kubernetes.io/projected/b458ac0e-0717-485d-8665-e46e63bdc1bd-kube-api-access-96nhg\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.217488 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.217729 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.233221 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data" (OuterVolumeSpecName: "config-data") pod "b458ac0e-0717-485d-8665-e46e63bdc1bd" (UID: "b458ac0e-0717-485d-8665-e46e63bdc1bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249421 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tc6k\" (UniqueName: \"kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249585 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249633 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249690 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249815 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.249860 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0\") pod \"25533537-7bbc-4377-8701-d21ec7b1f226\" (UID: \"25533537-7bbc-4377-8701-d21ec7b1f226\") " Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.250379 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.250399 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b458ac0e-0717-485d-8665-e46e63bdc1bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.255855 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k" (OuterVolumeSpecName: "kube-api-access-4tc6k") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "kube-api-access-4tc6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.329092 4688 generic.go:334] "Generic (PLEG): container finished" podID="25533537-7bbc-4377-8701-d21ec7b1f226" containerID="2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c" exitCode=0 Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.329175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerDied","Data":"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c"} Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.329380 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" event={"ID":"25533537-7bbc-4377-8701-d21ec7b1f226","Type":"ContainerDied","Data":"8918b521e3a27f4ac464299b296807604fdd8c69919420772d81f0004be3ea99"} Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.329399 4688 scope.go:117] "RemoveContainer" containerID="2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.329510 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-cj4zt" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.341452 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.341879 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.342085 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ee596ca-3388-41b9-9651-b0f92e4b838c","Type":"ContainerStarted","Data":"406d40544c00f6023c0ed44be620fff6683cd533bd2c9efff485868c28936cb0"} Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.342129 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.357760 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.357854 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tc6k\" (UniqueName: \"kubernetes.io/projected/25533537-7bbc-4377-8701-d21ec7b1f226-kube-api-access-4tc6k\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.357868 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.359056 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.361786 4688 scope.go:117] "RemoveContainer" containerID="ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.362133 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b458ac0e-0717-485d-8665-e46e63bdc1bd","Type":"ContainerDied","Data":"afde590cf43eefe3989f33ad811e2893d8c5c7c5abdc5eabfbb67da33a76c11f"} Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.366732 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config" (OuterVolumeSpecName: "config") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.380015 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.396464 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.334922199 podStartE2EDuration="3.396437849s" podCreationTimestamp="2026-01-23 18:30:17 +0000 UTC" firstStartedPulling="2026-01-23 18:30:18.241413712 +0000 UTC m=+1413.237238153" lastFinishedPulling="2026-01-23 18:30:19.302929362 +0000 UTC m=+1414.298753803" observedRunningTime="2026-01-23 18:30:20.371452924 +0000 UTC m=+1415.367277365" watchObservedRunningTime="2026-01-23 18:30:20.396437849 +0000 UTC m=+1415.392262290" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.401945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.404685 4688 scope.go:117] "RemoveContainer" containerID="2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.405506 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c\": container with ID starting with 2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c not found: ID does not exist" containerID="2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.405537 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c"} err="failed to get container status \"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c\": rpc error: code = NotFound desc = could not find container \"2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c\": container with ID starting with 2fcd4239b845ef6e622f5ce1634023f7885fb2d13e3f2cebf7b7460c8cb3dc3c not found: ID does not exist" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.406174 4688 scope.go:117] "RemoveContainer" containerID="ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.406701 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c\": container with ID starting with ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c not found: ID does not exist" containerID="ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.406783 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c"} err="failed to get container status \"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c\": rpc error: code = NotFound desc = could not find container \"ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c\": container with ID starting with ae7ecb1339ed00a5f62abeb5f2186d55d00246d0aeabf0364b543dbe5d4aa09c not found: ID does not exist" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.406852 4688 scope.go:117] "RemoveContainer" containerID="fc2f561acdd594915315147ec57ca207e3eb6e6985c8e305f3b32bb741593feb" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 
18:30:20.432520 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25533537-7bbc-4377-8701-d21ec7b1f226" (UID: "25533537-7bbc-4377-8701-d21ec7b1f226"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.460992 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.461023 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.461035 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25533537-7bbc-4377-8701-d21ec7b1f226-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.464091 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.464402 4688 scope.go:117] "RemoveContainer" containerID="eef0ce3d3be20f70f3950b4e285000da3bf251b24bb3c7dd9d4130ce65886c21" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.475267 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.475608 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.483074 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.495517 4688 scope.go:117] "RemoveContainer" containerID="6fc764cf04af30997a744e38bf48611aa9e87a7dfea300ea24116faceb5d68f4" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.507336 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.507883 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.507902 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.507915 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="proxy-httpd" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.507921 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="proxy-httpd" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.507939 4688 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="dnsmasq-dns" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.507946 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="dnsmasq-dns" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.507965 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-central-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.507973 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-central-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.507993 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-notification-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508001 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-notification-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.508029 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="sg-core" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508035 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="sg-core" Jan 23 18:30:20 crc kubenswrapper[4688]: E0123 18:30:20.508045 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="init" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508054 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="init" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508286 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="sg-core" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508303 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-central-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508314 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="proxy-httpd" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508328 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7828699-c881-4ed8-a26a-9837e4dbb301" containerName="horizon" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508340 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" containerName="ceilometer-notification-agent" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.508356 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" containerName="dnsmasq-dns" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.510536 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.512605 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.513614 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.513858 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.521626 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.528254 4688 scope.go:117] "RemoveContainer" containerID="c8ff82b33f5bd37b332f1c3459cb069d7b4168296ca70c1df2ae60a368154838" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.562765 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.562828 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.562879 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.562952 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.562989 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.563005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n72zb\" (UniqueName: \"kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.563078 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: 
I0123 18:30:20.563165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.665843 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.665907 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n72zb\" (UniqueName: \"kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.665989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.666069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.666127 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.666166 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.666260 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.666345 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.667636 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 
18:30:20.667786 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.672744 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.675564 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.679094 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.680038 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.687466 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n72zb\" (UniqueName: \"kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.699897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") " pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.834528 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"] Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.836673 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:20 crc kubenswrapper[4688]: I0123 18:30:20.847698 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-cj4zt"] Jan 23 18:30:21 crc kubenswrapper[4688]: I0123 18:30:21.371583 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25533537-7bbc-4377-8701-d21ec7b1f226" path="/var/lib/kubelet/pods/25533537-7bbc-4377-8701-d21ec7b1f226/volumes" Jan 23 18:30:21 crc kubenswrapper[4688]: I0123 18:30:21.372356 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b458ac0e-0717-485d-8665-e46e63bdc1bd" path="/var/lib/kubelet/pods/b458ac0e-0717-485d-8665-e46e63bdc1bd/volumes" Jan 23 18:30:21 crc kubenswrapper[4688]: W0123 18:30:21.375235 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod769d6526_d580_49cb_9c8b_01443462469d.slice/crio-5cc36be6d74582169dd764f7c382c95556902b8a71635b528595763263f89822 WatchSource:0}: Error finding container 5cc36be6d74582169dd764f7c382c95556902b8a71635b528595763263f89822: Status 404 returned error can't find the container with id 5cc36be6d74582169dd764f7c382c95556902b8a71635b528595763263f89822 Jan 23 18:30:21 crc kubenswrapper[4688]: I0123 18:30:21.396170 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:21 crc kubenswrapper[4688]: I0123 18:30:21.417660 4688 generic.go:334] "Generic (PLEG): container finished" podID="df94e7f5-9c11-410b-9513-d4e3350e1d29" containerID="35b77bb28805e9d7ad9b70aa1149b6d40234a7736a5cf7a58b3f6f80d6e940c7" exitCode=0 Jan 23 18:30:21 crc kubenswrapper[4688]: I0123 18:30:21.417749 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9287c" event={"ID":"df94e7f5-9c11-410b-9513-d4e3350e1d29","Type":"ContainerDied","Data":"35b77bb28805e9d7ad9b70aa1149b6d40234a7736a5cf7a58b3f6f80d6e940c7"} Jan 23 18:30:22 crc kubenswrapper[4688]: I0123 18:30:22.433839 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerStarted","Data":"5cc36be6d74582169dd764f7c382c95556902b8a71635b528595763263f89822"} Jan 23 18:30:22 crc kubenswrapper[4688]: I0123 18:30:22.968414 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.043346 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data\") pod \"df94e7f5-9c11-410b-9513-d4e3350e1d29\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.043458 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts\") pod \"df94e7f5-9c11-410b-9513-d4e3350e1d29\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.043548 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjbq4\" (UniqueName: \"kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4\") pod \"df94e7f5-9c11-410b-9513-d4e3350e1d29\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.043630 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle\") pod \"df94e7f5-9c11-410b-9513-d4e3350e1d29\" (UID: \"df94e7f5-9c11-410b-9513-d4e3350e1d29\") " Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.049971 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4" (OuterVolumeSpecName: "kube-api-access-kjbq4") pod "df94e7f5-9c11-410b-9513-d4e3350e1d29" (UID: "df94e7f5-9c11-410b-9513-d4e3350e1d29"). InnerVolumeSpecName "kube-api-access-kjbq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.050016 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts" (OuterVolumeSpecName: "scripts") pod "df94e7f5-9c11-410b-9513-d4e3350e1d29" (UID: "df94e7f5-9c11-410b-9513-d4e3350e1d29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.078151 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df94e7f5-9c11-410b-9513-d4e3350e1d29" (UID: "df94e7f5-9c11-410b-9513-d4e3350e1d29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.078842 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data" (OuterVolumeSpecName: "config-data") pod "df94e7f5-9c11-410b-9513-d4e3350e1d29" (UID: "df94e7f5-9c11-410b-9513-d4e3350e1d29"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.165845 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.165886 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.165902 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjbq4\" (UniqueName: \"kubernetes.io/projected/df94e7f5-9c11-410b-9513-d4e3350e1d29-kube-api-access-kjbq4\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.165914 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df94e7f5-9c11-410b-9513-d4e3350e1d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.448454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9287c" event={"ID":"df94e7f5-9c11-410b-9513-d4e3350e1d29","Type":"ContainerDied","Data":"c96009ffdc994e4dfd2d2ed8d306807c378e38ca042659b5cc8bd58aa2918711"} Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.448507 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96009ffdc994e4dfd2d2ed8d306807c378e38ca042659b5cc8bd58aa2918711" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.448587 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9287c" Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.450304 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerStarted","Data":"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de"} Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.640179 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.640879 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-log" containerID="cri-o://7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8" gracePeriod=30 Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.640933 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-api" containerID="cri-o://9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a" gracePeriod=30 Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.653713 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:23 crc kubenswrapper[4688]: I0123 18:30:23.654064 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerName="nova-scheduler-scheduler" containerID="cri-o://0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" gracePeriod=30 Jan 23 18:30:24 crc kubenswrapper[4688]: I0123 18:30:24.463884 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerStarted","Data":"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509"} Jan 23 18:30:24 crc kubenswrapper[4688]: I0123 18:30:24.468010 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerID="7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8" exitCode=143 Jan 23 18:30:24 crc kubenswrapper[4688]: I0123 18:30:24.468063 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerDied","Data":"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8"} Jan 23 18:30:24 crc kubenswrapper[4688]: E0123 18:30:24.489845 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 is running failed: container process not found" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:30:24 crc kubenswrapper[4688]: E0123 18:30:24.490518 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 is running failed: container process not found" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:30:24 crc kubenswrapper[4688]: E0123 18:30:24.490836 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 is running failed: container process not found" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:30:24 crc kubenswrapper[4688]: E0123 18:30:24.490883 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerName="nova-scheduler-scheduler" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.050537 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.109312 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle\") pod \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.109709 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data\") pod \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.109996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9622f\" (UniqueName: \"kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f\") pod \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\" (UID: \"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f\") " Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.117456 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f" (OuterVolumeSpecName: "kube-api-access-9622f") pod "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" (UID: "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f"). InnerVolumeSpecName "kube-api-access-9622f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.145354 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data" (OuterVolumeSpecName: "config-data") pod "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" (UID: "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.148081 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" (UID: "9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.212543 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9622f\" (UniqueName: \"kubernetes.io/projected/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-kube-api-access-9622f\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.212586 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.212606 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.481026 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerStarted","Data":"bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1"} Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.484048 4688 generic.go:334] "Generic (PLEG): container finished" podID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" exitCode=0 Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.484081 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.484100 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f","Type":"ContainerDied","Data":"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220"} Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.484460 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f","Type":"ContainerDied","Data":"61ea0194459be68adac71cbee83b912e9e6503de96d25dcc8cf1b8731b3d5daf"} Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.484485 4688 scope.go:117] "RemoveContainer" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.542771 4688 scope.go:117] "RemoveContainer" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" Jan 23 18:30:25 crc kubenswrapper[4688]: E0123 18:30:25.543962 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220\": container with ID starting with 0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 not found: ID does not exist" containerID="0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.544011 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220"} err="failed to get container status \"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220\": rpc error: code = NotFound desc = could not find container \"0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220\": container with ID starting with 
0f2ad397ad0dafb2c6e9bc3c4de537d4d9fd0766853c1f0a7988c5929c45b220 not found: ID does not exist" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.551220 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.573567 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.585731 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:25 crc kubenswrapper[4688]: E0123 18:30:25.586497 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerName="nova-scheduler-scheduler" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.586525 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerName="nova-scheduler-scheduler" Jan 23 18:30:25 crc kubenswrapper[4688]: E0123 18:30:25.586548 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df94e7f5-9c11-410b-9513-d4e3350e1d29" containerName="nova-manage" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.586557 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="df94e7f5-9c11-410b-9513-d4e3350e1d29" containerName="nova-manage" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.586810 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" containerName="nova-scheduler-scheduler" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.586835 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="df94e7f5-9c11-410b-9513-d4e3350e1d29" containerName="nova-manage" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.587947 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.591956 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.610849 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.724312 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpkr\" (UniqueName: \"kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.724524 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.724573 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.827017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.827100 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.827158 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpkr\" (UniqueName: \"kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.833942 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.835999 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.846095 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpkr\" (UniqueName: 
\"kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr\") pod \"nova-scheduler-0\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " pod="openstack/nova-scheduler-0" Jan 23 18:30:25 crc kubenswrapper[4688]: I0123 18:30:25.921831 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:30:26 crc kubenswrapper[4688]: I0123 18:30:26.434149 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:30:26 crc kubenswrapper[4688]: I0123 18:30:26.496085 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8fc4c6b-d528-4701-8cc1-31553b942468","Type":"ContainerStarted","Data":"c8784df624a6e16a84184eafe95f3311a423eeb3573f388bd9e682e5c247f200"} Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.314940 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.375530 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f" path="/var/lib/kubelet/pods/9bcc53ab-e2f5-4b07-873e-3b6d9855dd9f/volumes" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.462338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle\") pod \"6d47f4f4-8deb-4fbd-adad-1d248828b475\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.462500 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz55b\" (UniqueName: \"kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b\") pod \"6d47f4f4-8deb-4fbd-adad-1d248828b475\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.462551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data\") pod \"6d47f4f4-8deb-4fbd-adad-1d248828b475\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.462578 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs\") pod \"6d47f4f4-8deb-4fbd-adad-1d248828b475\" (UID: \"6d47f4f4-8deb-4fbd-adad-1d248828b475\") " Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.468390 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs" (OuterVolumeSpecName: "logs") pod "6d47f4f4-8deb-4fbd-adad-1d248828b475" (UID: "6d47f4f4-8deb-4fbd-adad-1d248828b475"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.468761 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b" (OuterVolumeSpecName: "kube-api-access-mz55b") pod "6d47f4f4-8deb-4fbd-adad-1d248828b475" (UID: "6d47f4f4-8deb-4fbd-adad-1d248828b475"). InnerVolumeSpecName "kube-api-access-mz55b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.503564 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d47f4f4-8deb-4fbd-adad-1d248828b475" (UID: "6d47f4f4-8deb-4fbd-adad-1d248828b475"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.505155 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data" (OuterVolumeSpecName: "config-data") pod "6d47f4f4-8deb-4fbd-adad-1d248828b475" (UID: "6d47f4f4-8deb-4fbd-adad-1d248828b475"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.523218 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerStarted","Data":"9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee"} Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.523893 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.540132 4688 generic.go:334] "Generic (PLEG): container finished" podID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerID="9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a" exitCode=0 Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.540245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerDied","Data":"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a"} Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.540282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d47f4f4-8deb-4fbd-adad-1d248828b475","Type":"ContainerDied","Data":"b077f3e69af1fbe808d9fd738614d02061515eb2a53513e42e19137efa36400d"} Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.540317 4688 scope.go:117] "RemoveContainer" containerID="9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.541346 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.544393 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8fc4c6b-d528-4701-8cc1-31553b942468","Type":"ContainerStarted","Data":"f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53"} Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.570781 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.574284 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz55b\" (UniqueName: \"kubernetes.io/projected/6d47f4f4-8deb-4fbd-adad-1d248828b475-kube-api-access-mz55b\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.574561 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d47f4f4-8deb-4fbd-adad-1d248828b475-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.574676 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d47f4f4-8deb-4fbd-adad-1d248828b475-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.582689 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.42226792 podStartE2EDuration="7.582669382s" podCreationTimestamp="2026-01-23 18:30:20 +0000 UTC" firstStartedPulling="2026-01-23 18:30:21.377830638 +0000 UTC m=+1416.373655079" lastFinishedPulling="2026-01-23 18:30:26.5382321 +0000 UTC m=+1421.534056541" observedRunningTime="2026-01-23 18:30:27.565888652 +0000 UTC m=+1422.561713093" watchObservedRunningTime="2026-01-23 18:30:27.582669382 +0000 UTC m=+1422.578493823" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.598894 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.598868646 podStartE2EDuration="2.598868646s" podCreationTimestamp="2026-01-23 18:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:27.591265798 +0000 UTC m=+1422.587090259" watchObservedRunningTime="2026-01-23 18:30:27.598868646 +0000 UTC m=+1422.594693087" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.627992 4688 scope.go:117] "RemoveContainer" containerID="7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.645721 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.657805 4688 scope.go:117] "RemoveContainer" containerID="9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a" Jan 23 18:30:27 crc kubenswrapper[4688]: E0123 18:30:27.658334 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a\": container with ID starting with 9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a not found: ID does not exist" containerID="9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a" Jan 
23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.658371 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a"} err="failed to get container status \"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a\": rpc error: code = NotFound desc = could not find container \"9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a\": container with ID starting with 9ed0683cd12b867adc666b6283fe561ef37619bcd1a33d79ef6f7ce4bcbc083a not found: ID does not exist" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.658402 4688 scope.go:117] "RemoveContainer" containerID="7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8" Jan 23 18:30:27 crc kubenswrapper[4688]: E0123 18:30:27.658632 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8\": container with ID starting with 7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8 not found: ID does not exist" containerID="7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.658660 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8"} err="failed to get container status \"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8\": rpc error: code = NotFound desc = could not find container \"7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8\": container with ID starting with 7a95706d423194952973c447090b1de92937b859df65d4eb312f7eebb3b8f1f8 not found: ID does not exist" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.661727 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.688155 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:27 crc kubenswrapper[4688]: E0123 18:30:27.688958 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-log" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.689073 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-log" Jan 23 18:30:27 crc kubenswrapper[4688]: E0123 18:30:27.689178 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-api" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.689266 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-api" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.689548 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-log" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.689622 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" containerName="nova-api-api" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.691352 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.694784 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.700617 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.726108 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.788108 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxw4\" (UniqueName: \"kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.788301 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.788339 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.788524 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.890663 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.890785 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxw4\" (UniqueName: \"kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.890859 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.890883 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.891375 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.901017 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.901117 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:27 crc kubenswrapper[4688]: I0123 18:30:27.910717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxw4\" (UniqueName: \"kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4\") pod \"nova-api-0\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " pod="openstack/nova-api-0" Jan 23 18:30:28 crc kubenswrapper[4688]: I0123 18:30:28.020069 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:28 crc kubenswrapper[4688]: I0123 18:30:28.523615 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:28 crc kubenswrapper[4688]: I0123 18:30:28.562722 4688 generic.go:334] "Generic (PLEG): container finished" podID="1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" containerID="de483ea2cf0508da8a24bfa7431659d9cdf99e46759873822d340d4bef3be1b8" exitCode=0 Jan 23 18:30:28 crc kubenswrapper[4688]: I0123 18:30:28.562818 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" event={"ID":"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab","Type":"ContainerDied","Data":"de483ea2cf0508da8a24bfa7431659d9cdf99e46759873822d340d4bef3be1b8"} Jan 23 18:30:28 crc kubenswrapper[4688]: I0123 18:30:28.564661 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerStarted","Data":"f3b04eda317d05178b2e1e9bb552f37cca6d66c77278f335f92d6143a345df32"} Jan 23 18:30:29 crc kubenswrapper[4688]: I0123 18:30:29.369678 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d47f4f4-8deb-4fbd-adad-1d248828b475" path="/var/lib/kubelet/pods/6d47f4f4-8deb-4fbd-adad-1d248828b475/volumes" Jan 23 18:30:29 crc kubenswrapper[4688]: I0123 18:30:29.579544 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerStarted","Data":"cb6878210b64656c467a07c92885e2108ce19d43881e70cd4fc3cd9d35c21748"} Jan 23 18:30:29 crc kubenswrapper[4688]: I0123 18:30:29.579621 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerStarted","Data":"d96062600c38e06b39d1257f648ee17aca74d224ca25521bf5b665a67f8064d6"} Jan 23 18:30:29 crc kubenswrapper[4688]: I0123 18:30:29.619230 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6191728789999997 podStartE2EDuration="2.619172879s" podCreationTimestamp="2026-01-23 18:30:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:29.604923001 +0000 UTC m=+1424.600747452" watchObservedRunningTime="2026-01-23 18:30:29.619172879 +0000 UTC m=+1424.614997330" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.031605 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.152709 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle\") pod \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.153006 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data\") pod \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.153045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts\") pod \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.153199 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tqs\" (UniqueName: \"kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs\") pod \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\" (UID: \"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab\") " Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.158420 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs" (OuterVolumeSpecName: "kube-api-access-54tqs") pod "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" (UID: "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab"). InnerVolumeSpecName "kube-api-access-54tqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.158440 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts" (OuterVolumeSpecName: "scripts") pod "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" (UID: "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.193726 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" (UID: "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.197418 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data" (OuterVolumeSpecName: "config-data") pod "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" (UID: "1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.256579 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.256632 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.256646 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tqs\" (UniqueName: \"kubernetes.io/projected/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-kube-api-access-54tqs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.256661 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.592339 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.592313 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tnsn2" event={"ID":"1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab","Type":"ContainerDied","Data":"8de496070dc8f31386bc39145517452a2f432e0d9c4fa3e17649d1fbab2e63bc"} Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.593701 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de496070dc8f31386bc39145517452a2f432e0d9c4fa3e17649d1fbab2e63bc" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.694371 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:30:30 crc kubenswrapper[4688]: E0123 18:30:30.697064 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" containerName="nova-cell1-conductor-db-sync" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.697163 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" containerName="nova-cell1-conductor-db-sync" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.698520 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" containerName="nova-cell1-conductor-db-sync" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.700095 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.704074 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.753654 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.768868 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.768987 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7cmf\" (UniqueName: \"kubernetes.io/projected/635921a5-2c42-44a0-8c9d-b1f9d5230145-kube-api-access-s7cmf\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.769321 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.871731 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7cmf\" (UniqueName: \"kubernetes.io/projected/635921a5-2c42-44a0-8c9d-b1f9d5230145-kube-api-access-s7cmf\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.871912 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.871994 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.876669 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.877471 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/635921a5-2c42-44a0-8c9d-b1f9d5230145-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.904767 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7cmf\" (UniqueName: \"kubernetes.io/projected/635921a5-2c42-44a0-8c9d-b1f9d5230145-kube-api-access-s7cmf\") pod \"nova-cell1-conductor-0\" (UID: \"635921a5-2c42-44a0-8c9d-b1f9d5230145\") " pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:30 crc kubenswrapper[4688]: I0123 18:30:30.922632 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 18:30:31 crc kubenswrapper[4688]: I0123 18:30:31.041689 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:31 crc kubenswrapper[4688]: I0123 18:30:31.572971 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 18:30:31 crc kubenswrapper[4688]: W0123 18:30:31.575922 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod635921a5_2c42_44a0_8c9d_b1f9d5230145.slice/crio-322317524f99b02965f5def7476ed00bdd47992a05bb000ce3be285072275417 WatchSource:0}: Error finding container 322317524f99b02965f5def7476ed00bdd47992a05bb000ce3be285072275417: Status 404 returned error can't find the container with id 322317524f99b02965f5def7476ed00bdd47992a05bb000ce3be285072275417 Jan 23 18:30:31 crc kubenswrapper[4688]: I0123 18:30:31.606053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"635921a5-2c42-44a0-8c9d-b1f9d5230145","Type":"ContainerStarted","Data":"322317524f99b02965f5def7476ed00bdd47992a05bb000ce3be285072275417"} Jan 23 18:30:32 crc kubenswrapper[4688]: I0123 18:30:32.621574 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"635921a5-2c42-44a0-8c9d-b1f9d5230145","Type":"ContainerStarted","Data":"05909f0d4d607c637378c3473c6d870dcb5aa116e010fe6918cb02edb6ebb037"} Jan 23 18:30:32 crc kubenswrapper[4688]: I0123 18:30:32.621903 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:32 crc kubenswrapper[4688]: I0123 18:30:32.643410 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.643384353 podStartE2EDuration="2.643384353s" podCreationTimestamp="2026-01-23 18:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:32.635199308 +0000 UTC m=+1427.631023749" watchObservedRunningTime="2026-01-23 18:30:32.643384353 +0000 UTC m=+1427.639208794" Jan 23 18:30:35 crc kubenswrapper[4688]: I0123 18:30:35.923594 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:30:35 crc kubenswrapper[4688]: I0123 18:30:35.955564 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:30:36 crc kubenswrapper[4688]: I0123 18:30:36.073462 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 18:30:36 crc kubenswrapper[4688]: I0123 18:30:36.688431 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:30:36 crc kubenswrapper[4688]: I0123 18:30:36.965741 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:30:36 crc kubenswrapper[4688]: I0123 18:30:36.965840 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:30:38 crc kubenswrapper[4688]: I0123 18:30:38.021096 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:30:38 crc kubenswrapper[4688]: I0123 18:30:38.021636 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:30:39 crc kubenswrapper[4688]: I0123 18:30:39.104525 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:39 crc kubenswrapper[4688]: I0123 18:30:39.105834 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.791615 4688 generic.go:334] "Generic (PLEG): container finished" podID="71b056ea-ae53-487f-a251-e4bba40fa78d" containerID="81c071448a3c95bd450cef3762419a33f0274774666dd45cf03431eb2cd81eb6" exitCode=137 Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.792176 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"71b056ea-ae53-487f-a251-e4bba40fa78d","Type":"ContainerDied","Data":"81c071448a3c95bd450cef3762419a33f0274774666dd45cf03431eb2cd81eb6"} Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.797151 4688 generic.go:334] "Generic (PLEG): container finished" podID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerID="5f37432c05caf252e571b83626eba8a1a10930382bc31901ae88aa7bba942c02" exitCode=137 Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.797251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerDied","Data":"5f37432c05caf252e571b83626eba8a1a10930382bc31901ae88aa7bba942c02"} Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.938421 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.946952 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986174 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data\") pod \"71b056ea-ae53-487f-a251-e4bba40fa78d\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986284 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs\") pod \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltprd\" (UniqueName: \"kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd\") pod \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986381 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data\") pod \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986417 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle\") pod \"71b056ea-ae53-487f-a251-e4bba40fa78d\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986470 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle\") pod \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\" (UID: \"1fa5e019-5d26-4fa6-a9c0-a620b15e123d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.986598 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwr8l\" (UniqueName: \"kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l\") pod \"71b056ea-ae53-487f-a251-e4bba40fa78d\" (UID: \"71b056ea-ae53-487f-a251-e4bba40fa78d\") " Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.990556 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs" (OuterVolumeSpecName: "logs") pod "1fa5e019-5d26-4fa6-a9c0-a620b15e123d" (UID: "1fa5e019-5d26-4fa6-a9c0-a620b15e123d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:46 crc kubenswrapper[4688]: I0123 18:30:46.992477 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l" (OuterVolumeSpecName: "kube-api-access-rwr8l") pod "71b056ea-ae53-487f-a251-e4bba40fa78d" (UID: "71b056ea-ae53-487f-a251-e4bba40fa78d"). InnerVolumeSpecName "kube-api-access-rwr8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.003231 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd" (OuterVolumeSpecName: "kube-api-access-ltprd") pod "1fa5e019-5d26-4fa6-a9c0-a620b15e123d" (UID: "1fa5e019-5d26-4fa6-a9c0-a620b15e123d"). InnerVolumeSpecName "kube-api-access-ltprd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.024500 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data" (OuterVolumeSpecName: "config-data") pod "71b056ea-ae53-487f-a251-e4bba40fa78d" (UID: "71b056ea-ae53-487f-a251-e4bba40fa78d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.044485 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data" (OuterVolumeSpecName: "config-data") pod "1fa5e019-5d26-4fa6-a9c0-a620b15e123d" (UID: "1fa5e019-5d26-4fa6-a9c0-a620b15e123d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.061955 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b056ea-ae53-487f-a251-e4bba40fa78d" (UID: "71b056ea-ae53-487f-a251-e4bba40fa78d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.069482 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fa5e019-5d26-4fa6-a9c0-a620b15e123d" (UID: "1fa5e019-5d26-4fa6-a9c0-a620b15e123d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088335 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088393 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088444 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltprd\" (UniqueName: \"kubernetes.io/projected/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-kube-api-access-ltprd\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088460 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088475 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b056ea-ae53-487f-a251-e4bba40fa78d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088489 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fa5e019-5d26-4fa6-a9c0-a620b15e123d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.088501 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwr8l\" (UniqueName: \"kubernetes.io/projected/71b056ea-ae53-487f-a251-e4bba40fa78d-kube-api-access-rwr8l\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.811442 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1fa5e019-5d26-4fa6-a9c0-a620b15e123d","Type":"ContainerDied","Data":"5ceb937617df946015b24aeab12fe85f6a08861636fd06a7fed98e5af10d2aa2"} Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.812070 4688 scope.go:117] "RemoveContainer" containerID="5f37432c05caf252e571b83626eba8a1a10930382bc31901ae88aa7bba942c02" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.811479 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.814475 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"71b056ea-ae53-487f-a251-e4bba40fa78d","Type":"ContainerDied","Data":"bb6d00c16fee720cf28a6c0b85590d36f2380c718d46a435306459f8c337ac08"} Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.814663 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.849448 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.854584 4688 scope.go:117] "RemoveContainer" containerID="f99fecb15d33c7306b4f4588c096a41adb3d842f6a66fdfdffb1cc84078e03d3" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.879535 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.891480 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.897685 4688 scope.go:117] "RemoveContainer" containerID="81c071448a3c95bd450cef3762419a33f0274774666dd45cf03431eb2cd81eb6" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.904174 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.934047 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: E0123 18:30:47.934918 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b056ea-ae53-487f-a251-e4bba40fa78d" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.934947 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b056ea-ae53-487f-a251-e4bba40fa78d" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:30:47 crc kubenswrapper[4688]: E0123 18:30:47.934975 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-log" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.934985 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-log" Jan 23 18:30:47 crc kubenswrapper[4688]: E0123 18:30:47.935020 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-metadata" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.935031 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-metadata" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.935424 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b056ea-ae53-487f-a251-e4bba40fa78d" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.935476 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-log" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.935491 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" containerName="nova-metadata-metadata" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.937038 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.939398 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.942893 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.943148 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.951848 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.954566 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.957710 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.957847 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.972796 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:30:47 crc kubenswrapper[4688]: I0123 18:30:47.985678 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.025252 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.025338 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.026021 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.026042 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.029507 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.033715 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118105 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118206 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fl9\" (UniqueName: \"kubernetes.io/projected/fe552058-5e47-429c-ac41-e315827552ab-kube-api-access-75fl9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118472 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rxgnl\" (UniqueName: \"kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118559 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118628 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118737 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.118872 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.119141 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.119259 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.225958 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fl9\" (UniqueName: \"kubernetes.io/projected/fe552058-5e47-429c-ac41-e315827552ab-kube-api-access-75fl9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226065 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxgnl\" (UniqueName: 
\"kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226098 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226134 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226199 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226224 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226242 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226396 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.226582 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.231738 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 
18:30:48.238800 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.239558 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.240004 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.242518 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.242930 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.252275 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe552058-5e47-429c-ac41-e315827552ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.263637 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.278921 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fl9\" (UniqueName: \"kubernetes.io/projected/fe552058-5e47-429c-ac41-e315827552ab-kube-api-access-75fl9\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe552058-5e47-429c-ac41-e315827552ab\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.308869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxgnl\" (UniqueName: \"kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl\") pod \"nova-metadata-0\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " pod="openstack/nova-metadata-0" Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.321023 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"] Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.323285 4688 util.go:30] "No sandbox for pod can be 
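
The kube-api-access-75fl9 and kube-api-access-rxgnl volumes set up above are the projected service-account-token volumes injected into every pod; once MountVolume.SetUp succeeds, the container sees them as files under the standard service-account path. A sketch of what a process inside one of these containers could read; the path is the Kubernetes default, and the snippet itself is illustrative:

    // read_sa_token.go - illustrative sketch only; run inside a pod.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Default mount point of a kube-api-access-* projected volume.
        base := "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            data, err := os.ReadFile(filepath.Join(base, name))
            if err != nil {
                fmt.Println(name, "not available:", err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", name, len(data))
        }
    }
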
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.321023 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"]
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.323285 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.347226 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"]
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435651 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435888 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7j4\" (UniqueName: \"kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.435968 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.537758 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.537890 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g7j4\" (UniqueName: \"kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.537967 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.538035 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.538055 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.538089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.538971 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.539059 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.539086 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.539345 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.539686 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.561739 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.568907 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g7j4\" (UniqueName: \"kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4\") pod \"dnsmasq-dns-89c5cd4d5-rk52f\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.575456 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 23 18:30:48 crc kubenswrapper[4688]: I0123 18:30:48.707639 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.322796 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.378853 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa5e019-5d26-4fa6-a9c0-a620b15e123d" path="/var/lib/kubelet/pods/1fa5e019-5d26-4fa6-a9c0-a620b15e123d/volumes"
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.379576 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b056ea-ae53-487f-a251-e4bba40fa78d" path="/var/lib/kubelet/pods/71b056ea-ae53-487f-a251-e4bba40fa78d/volumes"
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.506805 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.617421 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"]
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.954421 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe552058-5e47-429c-ac41-e315827552ab","Type":"ContainerStarted","Data":"29d2a9dc28181725a16678ea5ff4b7f6b0fa9a516d07b132edbd4ad8e81c03ee"}
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.956607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerStarted","Data":"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af"}
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.956644 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerStarted","Data":"be129de8e38a82b048ca5a8b0d927e45889c2ddc5eb3d734951194c9302b1787"}
Jan 23 18:30:49 crc kubenswrapper[4688]: I0123 18:30:49.957652 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" event={"ID":"1db3efae-8276-4970-9593-b92065efdc42","Type":"ContainerStarted","Data":"3a56f00984fb28ac8f4056cc097f38478a1c1383a158d302266cdf200c75db08"}
Jan 23 18:30:50 crc kubenswrapper[4688]: I0123 18:30:50.864679 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 23 18:30:50 crc kubenswrapper[4688]: I0123 18:30:50.977996 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe552058-5e47-429c-ac41-e315827552ab","Type":"ContainerStarted","Data":"60d98dd279f52fff94648540c72e986e8b2ab0fb5c80157e5b3bde8cbb74d9fc"}
Jan 23 18:30:50 crc kubenswrapper[4688]: I0123 18:30:50.981530 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerStarted","Data":"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777"}
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.006008 4688 generic.go:334] "Generic (PLEG): container finished" podID="1db3efae-8276-4970-9593-b92065efdc42" containerID="e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54" exitCode=0
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.006072 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" event={"ID":"1db3efae-8276-4970-9593-b92065efdc42","Type":"ContainerDied","Data":"e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54"}
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.013792 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.013772049 podStartE2EDuration="4.013772049s" podCreationTimestamp="2026-01-23 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:51.011672609 +0000 UTC m=+1446.007497050" watchObservedRunningTime="2026-01-23 18:30:51.013772049 +0000 UTC m=+1446.009596490"
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.061546 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.061523305 podStartE2EDuration="4.061523305s" podCreationTimestamp="2026-01-23 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:51.036762111 +0000 UTC m=+1446.032586562" watchObservedRunningTime="2026-01-23 18:30:51.061523305 +0000 UTC m=+1446.057347756"
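
The pod_startup_latency_tracker entries quantify the restart: for nova-cell1-novncproxy-0 the logged podStartSLOduration of 4.013772049s is exactly watchObservedRunningTime (18:30:51.013772049) minus podCreationTimestamp (18:30:47), and the pulling timestamps are zero values because no image pull was needed. A sketch reproducing the arithmetic from the logged values:

    // start_latency.go - reproduces the duration arithmetic from the entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        // time.Parse accepts fractional seconds in the input even though the
        // layout omits them; both values are copied from the log entry above.
        created, err := time.Parse(layout, "2026-01-23 18:30:47 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2026-01-23 18:30:51.013772049 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 4.013772049s, matching podStartSLOduration
    }
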
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.461985 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.462566 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-log" containerID="cri-o://d96062600c38e06b39d1257f648ee17aca74d224ca25521bf5b665a67f8064d6" gracePeriod=30
Jan 23 18:30:51 crc kubenswrapper[4688]: I0123 18:30:51.462656 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-api" containerID="cri-o://cb6878210b64656c467a07c92885e2108ce19d43881e70cd4fc3cd9d35c21748" gracePeriod=30
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.020744 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" event={"ID":"1db3efae-8276-4970-9593-b92065efdc42","Type":"ContainerStarted","Data":"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7"}
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.021945 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f"
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.026316 4688 generic.go:334] "Generic (PLEG): container finished" podID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerID="d96062600c38e06b39d1257f648ee17aca74d224ca25521bf5b665a67f8064d6" exitCode=143
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.026623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerDied","Data":"d96062600c38e06b39d1257f648ee17aca74d224ca25521bf5b665a67f8064d6"}
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.044469 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" podStartSLOduration=4.044438203 podStartE2EDuration="4.044438203s" podCreationTimestamp="2026-01-23 18:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:52.040874751 +0000 UTC m=+1447.036699202" watchObservedRunningTime="2026-01-23 18:30:52.044438203 +0000 UTC m=+1447.040262644"
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.160019 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.160876 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-central-agent" containerID="cri-o://9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de" gracePeriod=30
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.160952 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="proxy-httpd" containerID="cri-o://9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee" gracePeriod=30
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.160971 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="sg-core" containerID="cri-o://bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1" gracePeriod=30
Jan 23 18:30:52 crc kubenswrapper[4688]: I0123 18:30:52.160990 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-notification-agent" containerID="cri-o://2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509" gracePeriod=30
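
The "Killing container with a grace period" entries follow directly from the two API deletes: the kubelet sends each container SIGTERM and allows it gracePeriod=30 seconds before escalating to SIGKILL. The exit codes fit that pattern: nova-api-log's exitCode=143 above is 128+15, i.e. the process ended on the SIGTERM, while the ceilometer containers that follow exit with 0 (clean stop) or, for sg-core, its own error status 2. A sketch of the API call that starts this sequence; only the kubeconfig path is an assumption:

    // graceful_delete.go - illustrative sketch of a pod delete with an explicit
    // grace period, the call behind the SyncLoop DELETE entries above.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        grace := int64(30) // kubelet SIGTERMs each container, then SIGKILLs after 30s
        if err := cs.CoreV1().Pods("openstack").Delete(context.TODO(), "nova-api-0",
            metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
            panic(err)
        }
    }
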
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:53 crc kubenswrapper[4688]: I0123 18:30:53.575671 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:30:53 crc kubenswrapper[4688]: I0123 18:30:53.575733 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:30:54 crc kubenswrapper[4688]: I0123 18:30:54.057003 4688 generic.go:334] "Generic (PLEG): container finished" podID="769d6526-d580-49cb-9c8b-01443462469d" containerID="9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de" exitCode=0 Jan 23 18:30:54 crc kubenswrapper[4688]: I0123 18:30:54.057076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerDied","Data":"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de"} Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.075260 4688 generic.go:334] "Generic (PLEG): container finished" podID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerID="cb6878210b64656c467a07c92885e2108ce19d43881e70cd4fc3cd9d35c21748" exitCode=0 Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.075357 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerDied","Data":"cb6878210b64656c467a07c92885e2108ce19d43881e70cd4fc3cd9d35c21748"} Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.419359 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.512423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgxw4\" (UniqueName: \"kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4\") pod \"c4ecc68b-c045-4370-a591-6480cc99e21e\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.515113 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs\") pod \"c4ecc68b-c045-4370-a591-6480cc99e21e\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.515423 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data\") pod \"c4ecc68b-c045-4370-a591-6480cc99e21e\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.515538 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle\") pod \"c4ecc68b-c045-4370-a591-6480cc99e21e\" (UID: \"c4ecc68b-c045-4370-a591-6480cc99e21e\") " Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.516176 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs" (OuterVolumeSpecName: "logs") pod "c4ecc68b-c045-4370-a591-6480cc99e21e" (UID: "c4ecc68b-c045-4370-a591-6480cc99e21e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.516540 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4ecc68b-c045-4370-a591-6480cc99e21e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.540107 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4" (OuterVolumeSpecName: "kube-api-access-lgxw4") pod "c4ecc68b-c045-4370-a591-6480cc99e21e" (UID: "c4ecc68b-c045-4370-a591-6480cc99e21e"). InnerVolumeSpecName "kube-api-access-lgxw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.561741 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4ecc68b-c045-4370-a591-6480cc99e21e" (UID: "c4ecc68b-c045-4370-a591-6480cc99e21e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.593571 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data" (OuterVolumeSpecName: "config-data") pod "c4ecc68b-c045-4370-a591-6480cc99e21e" (UID: "c4ecc68b-c045-4370-a591-6480cc99e21e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.618455 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.618502 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4ecc68b-c045-4370-a591-6480cc99e21e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:55 crc kubenswrapper[4688]: I0123 18:30:55.618522 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgxw4\" (UniqueName: \"kubernetes.io/projected/c4ecc68b-c045-4370-a591-6480cc99e21e-kube-api-access-lgxw4\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.089741 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4ecc68b-c045-4370-a591-6480cc99e21e","Type":"ContainerDied","Data":"f3b04eda317d05178b2e1e9bb552f37cca6d66c77278f335f92d6143a345df32"} Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.089820 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.090924 4688 scope.go:117] "RemoveContainer" containerID="cb6878210b64656c467a07c92885e2108ce19d43881e70cd4fc3cd9d35c21748" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.134124 4688 scope.go:117] "RemoveContainer" containerID="d96062600c38e06b39d1257f648ee17aca74d224ca25521bf5b665a67f8064d6" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.233970 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.246261 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.258437 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:56 crc kubenswrapper[4688]: E0123 18:30:56.259058 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-log" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.259077 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-log" Jan 23 18:30:56 crc kubenswrapper[4688]: E0123 18:30:56.259105 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-api" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.259113 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-api" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.259447 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-log" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.259466 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" containerName="nova-api-api" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.260903 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.272083 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.274942 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.352201 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356591 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356708 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356742 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzbm\" (UniqueName: \"kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356860 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.356907 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.409995 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459252 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459374 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459439 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459564 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459600 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nzbm\" (UniqueName: \"kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.459691 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.463623 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.467581 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.468508 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.469916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.472815 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.486890 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nzbm\" (UniqueName: \"kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm\") pod \"nova-api-0\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " pod="openstack/nova-api-0" Jan 
23 18:30:56 crc kubenswrapper[4688]: I0123 18:30:56.690488 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.178498 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.373712 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ecc68b-c045-4370-a591-6480cc99e21e" path="/var/lib/kubelet/pods/c4ecc68b-c045-4370-a591-6480cc99e21e/volumes" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.550812 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.553212 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.567806 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.697144 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9nz\" (UniqueName: \"kubernetes.io/projected/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-kube-api-access-rz9nz\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.697507 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-utilities\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.697552 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-catalog-content\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.800165 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz9nz\" (UniqueName: \"kubernetes.io/projected/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-kube-api-access-rz9nz\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.800284 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-utilities\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.800315 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-catalog-content\") pod \"redhat-operators-nn944\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.800971 
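
redhat-operators-nn944 is an OLM catalog pod; the utilities and catalog-content volumes mounted above are plain emptyDir scratch space that the catalog image fills after start. A sketch of how such volumes are declared in a pod spec; the pod, namespace, and volume names come from the log, while the container name, image, and mount paths are invented for illustration:

    // emptydir_volumes.go - illustrative sketch only; image, container name, and
    // mount paths are assumptions, not taken from the log.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "redhat-operators-nn944", Namespace: "openshift-marketplace"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{
                    {Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
                    {Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
                },
                Containers: []corev1.Container{{
                    Name:  "registry-server",          // assumption
                    Image: "registry.example/catalog", // assumption
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "utilities", MountPath: "/utilities"},       // assumption
                        {Name: "catalog-content", MountPath: "/extracted"}, // assumption
                    },
                }},
            },
        }
        fmt.Println(len(pod.Spec.Volumes), "emptyDir volumes declared")
    }
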
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902097 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902226 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902323 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902405 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902444 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n72zb\" (UniqueName: \"kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902542 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.902616 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd\") pod \"769d6526-d580-49cb-9c8b-01443462469d\" (UID: \"769d6526-d580-49cb-9c8b-01443462469d\") "
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.903289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.903712 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.906424 4688 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.906448 4688 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/769d6526-d580-49cb-9c8b-01443462469d-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.906487 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts" (OuterVolumeSpecName: "scripts") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.910316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb" (OuterVolumeSpecName: "kube-api-access-n72zb") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "kube-api-access-n72zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.942928 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:30:57 crc kubenswrapper[4688]: I0123 18:30:57.978563 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.002145 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.008576 4688 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.008607 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.008620 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n72zb\" (UniqueName: \"kubernetes.io/projected/769d6526-d580-49cb-9c8b-01443462469d-kube-api-access-n72zb\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.008633 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.008643 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.028131 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data" (OuterVolumeSpecName: "config-data") pod "769d6526-d580-49cb-9c8b-01443462469d" (UID: "769d6526-d580-49cb-9c8b-01443462469d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.102144 4688 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.111752 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/769d6526-d580-49cb-9c8b-01443462469d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.118413 4688 generic.go:334] "Generic (PLEG): container finished" podID="769d6526-d580-49cb-9c8b-01443462469d" containerID="2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509" exitCode=0 Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.118483 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerDied","Data":"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509"} Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.118488 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.118517 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"769d6526-d580-49cb-9c8b-01443462469d","Type":"ContainerDied","Data":"5cc36be6d74582169dd764f7c382c95556902b8a71635b528595763263f89822"} Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.118538 4688 scope.go:117] "RemoveContainer" containerID="9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.120880 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerStarted","Data":"51de707cfc503fbaf6bb396a0534539ee8e82ba4e875c5ba2b80fab3769fde5b"} Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.120904 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerStarted","Data":"12efca1ff5a4c4d63df10e8025c5f19431d3ff243d0612af98cbc31317862b3a"} Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.120914 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerStarted","Data":"c2b669420dccbb6f2f0e1c4dfddc595c151ed7ef0044b10db81e78299ac83d9d"} Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.143777 4688 scope.go:117] "RemoveContainer" containerID="bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.153945 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.153924676 podStartE2EDuration="2.153924676s" podCreationTimestamp="2026-01-23 18:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:30:58.151468756 +0000 UTC m=+1453.147293207" watchObservedRunningTime="2026-01-23 18:30:58.153924676 +0000 UTC m=+1453.149749117" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.178827 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.191531 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.211592 4688 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.212039 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-central-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212056 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-central-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.212072 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="proxy-httpd" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212078 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="proxy-httpd" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.212108 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-notification-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212114 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-notification-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.212139 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="sg-core" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212145 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="sg-core" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212325 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-notification-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212342 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="sg-core" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212358 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="ceilometer-central-agent" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.212370 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="769d6526-d580-49cb-9c8b-01443462469d" containerName="proxy-httpd" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.220481 4688 scope.go:117] "RemoveContainer" containerID="2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.220902 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.225850 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.226079 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.226225 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.288243 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.300140 4688 scope.go:117] "RemoveContainer" containerID="9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316499 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-log-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316548 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316634 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-run-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316762 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-config-data\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316813 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316840 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-scripts\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316892 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.316919 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p45tc\" (UniqueName: \"kubernetes.io/projected/a9fb5995-71ba-46d0-8e43-e5325af334dd-kube-api-access-p45tc\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.349109 4688 scope.go:117] "RemoveContainer" containerID="9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.349514 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee\": container with ID starting with 9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee not found: ID does not exist" containerID="9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.349546 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee"} err="failed to get container status \"9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee\": rpc error: code = NotFound desc = could not find container \"9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee\": container with ID starting with 9c068135536772b3faefe0f0038fcf575172e5b2f48e7ad84279f0defac61bee not found: ID does not exist" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.349578 4688 scope.go:117] "RemoveContainer" containerID="bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.349827 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1\": container with ID starting with bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1 not found: ID does not exist" containerID="bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.349845 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1"} err="failed to get container status \"bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1\": rpc error: code = NotFound desc = could not find container \"bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1\": container with ID starting with bec172c3560b98340a0c5417664740150e75ba0a2738dfa10ada18e7eda42ea1 not found: ID does not exist" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.349860 4688 scope.go:117] "RemoveContainer" containerID="2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.350158 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509\": container with ID starting with 2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509 not found: ID does not exist" containerID="2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.350196 4688 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509"} err="failed to get container status \"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509\": rpc error: code = NotFound desc = could not find container \"2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509\": container with ID starting with 2407c20aa52b339fc2bfbcd6c2cbe4e69fd84c1f18021cccf2a0e887c444f509 not found: ID does not exist" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.350217 4688 scope.go:117] "RemoveContainer" containerID="9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de" Jan 23 18:30:58 crc kubenswrapper[4688]: E0123 18:30:58.350502 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de\": container with ID starting with 9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de not found: ID does not exist" containerID="9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.350520 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de"} err="failed to get container status \"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de\": rpc error: code = NotFound desc = could not find container \"9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de\": container with ID starting with 9352ef1d9f67813fdd2283c34563a12b61befbff952d4a15655544c1296d55de not found: ID does not exist" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.418755 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-run-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.418887 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-config-data\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.418943 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.418961 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-scripts\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.419023 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p45tc\" (UniqueName: \"kubernetes.io/projected/a9fb5995-71ba-46d0-8e43-e5325af334dd-kube-api-access-p45tc\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.419042 
4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.419204 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-log-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.419231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.421019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-run-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.422173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9fb5995-71ba-46d0-8e43-e5325af334dd-log-httpd\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.425692 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.425812 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.431895 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.432774 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-config-data\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.441530 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p45tc\" (UniqueName: \"kubernetes.io/projected/a9fb5995-71ba-46d0-8e43-e5325af334dd-kube-api-access-p45tc\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.480555 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9fb5995-71ba-46d0-8e43-e5325af334dd-scripts\") pod \"ceilometer-0\" (UID: \"a9fb5995-71ba-46d0-8e43-e5325af334dd\") " pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.562718 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.577740 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.577793 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.582840 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.629435 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.710372 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.765957 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.802977 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:30:58 crc kubenswrapper[4688]: I0123 18:30:58.803330 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="dnsmasq-dns" containerID="cri-o://45fad2ae8c5a88a7d0f0ceb2499f666e9eb96a0ecfcd0bcee87664b75350d2d8" gracePeriod=10 Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.154629 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerID="e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af" exitCode=0 Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.154989 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerDied","Data":"e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af"} Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.155038 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerStarted","Data":"820516e6ef66f8c74f7f8f105332b716154948194bbf1794514d6eb79f986b4f"} Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.171714 4688 generic.go:334] "Generic (PLEG): container finished" podID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerID="45fad2ae8c5a88a7d0f0ceb2499f666e9eb96a0ecfcd0bcee87664b75350d2d8" exitCode=0 Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.172976 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" event={"ID":"e51086ce-d00f-4b91-82e5-fd207f2908b2","Type":"ContainerDied","Data":"45fad2ae8c5a88a7d0f0ceb2499f666e9eb96a0ecfcd0bcee87664b75350d2d8"} Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.318945 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.322432 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.408163 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="769d6526-d580-49cb-9c8b-01443462469d" path="/var/lib/kubelet/pods/769d6526-d580-49cb-9c8b-01443462469d/volumes" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.596469 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.596806 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.603873 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.651872 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z7vfb"] Jan 23 18:30:59 crc kubenswrapper[4688]: E0123 18:30:59.653734 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="init" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.653766 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="init" Jan 23 18:30:59 crc kubenswrapper[4688]: E0123 18:30:59.653804 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="dnsmasq-dns" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.653815 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="dnsmasq-dns" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.654210 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" containerName="dnsmasq-dns" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.664037 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.668311 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.669178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.729262 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7vfb"] Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775080 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775252 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775317 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775429 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv6nn\" (UniqueName: \"kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775466 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.775603 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb\") pod \"e51086ce-d00f-4b91-82e5-fd207f2908b2\" (UID: \"e51086ce-d00f-4b91-82e5-fd207f2908b2\") " Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.776050 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.776119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8dg\" (UniqueName: \"kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 
crc kubenswrapper[4688]: I0123 18:30:59.776208 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.776354 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.821563 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn" (OuterVolumeSpecName: "kube-api-access-fv6nn") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "kube-api-access-fv6nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.880995 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.881087 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.881221 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.881276 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8dg\" (UniqueName: \"kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.881330 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.881430 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:59 crc 
kubenswrapper[4688]: I0123 18:30:59.881441 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv6nn\" (UniqueName: \"kubernetes.io/projected/e51086ce-d00f-4b91-82e5-fd207f2908b2-kube-api-access-fv6nn\") on node \"crc\" DevicePath \"\"" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.887774 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.892877 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.910915 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.931519 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8dg\" (UniqueName: \"kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg\") pod \"nova-cell1-cell-mapping-z7vfb\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:30:59 crc kubenswrapper[4688]: I0123 18:30:59.994684 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.002923 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config" (OuterVolumeSpecName: "config") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.018467 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.041853 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.075824 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e51086ce-d00f-4b91-82e5-fd207f2908b2" (UID: "e51086ce-d00f-4b91-82e5-fd207f2908b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.091943 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.091976 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.091987 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.091995 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51086ce-d00f-4b91-82e5-fd207f2908b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.188559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9fb5995-71ba-46d0-8e43-e5325af334dd","Type":"ContainerStarted","Data":"e529f31cda04a4951004c46929abeb39c7bfa3ed8482d4dae317ef2e54e1330d"} Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.197503 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" event={"ID":"e51086ce-d00f-4b91-82e5-fd207f2908b2","Type":"ContainerDied","Data":"3a581487d4659a88eeedf1914114e9c84fc35229fbf11b2e11c7151f063226ae"} Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.197555 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jnwhl" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.197563 4688 scope.go:117] "RemoveContainer" containerID="45fad2ae8c5a88a7d0f0ceb2499f666e9eb96a0ecfcd0bcee87664b75350d2d8" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.368522 4688 scope.go:117] "RemoveContainer" containerID="e859d0602875c9b964880e05540c15c7c13112b533fe9ced71cb67a216bd2234" Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.415855 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.428417 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jnwhl"] Jan 23 18:31:00 crc kubenswrapper[4688]: I0123 18:31:00.738344 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7vfb"] Jan 23 18:31:00 crc kubenswrapper[4688]: W0123 18:31:00.752210 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82599c77_bc56_4a8b_a55a_e18645e80522.slice/crio-19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4 WatchSource:0}: Error finding container 19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4: Status 404 returned error can't find the container with id 19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4 Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.212991 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9fb5995-71ba-46d0-8e43-e5325af334dd","Type":"ContainerStarted","Data":"6c951db3d95fb1f524686da48cb5013ee9ec7bb2902179b7a66c9b00731b68b9"} Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.215650 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7vfb" event={"ID":"82599c77-bc56-4a8b-a55a-e18645e80522","Type":"ContainerStarted","Data":"531a509f503e7e517945c6cc11c25f9659de7af2d1bb94ddd7920f1f0e9e443f"} Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.215708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7vfb" event={"ID":"82599c77-bc56-4a8b-a55a-e18645e80522","Type":"ContainerStarted","Data":"19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4"} Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.221746 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerStarted","Data":"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69"} Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.243159 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z7vfb" podStartSLOduration=2.24313657 podStartE2EDuration="2.24313657s" podCreationTimestamp="2026-01-23 18:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:31:01.238589769 +0000 UTC m=+1456.234414210" watchObservedRunningTime="2026-01-23 18:31:01.24313657 +0000 UTC m=+1456.238961011" Jan 23 18:31:01 crc kubenswrapper[4688]: I0123 18:31:01.382927 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51086ce-d00f-4b91-82e5-fd207f2908b2" path="/var/lib/kubelet/pods/e51086ce-d00f-4b91-82e5-fd207f2908b2/volumes" Jan 23 18:31:02 crc 
kubenswrapper[4688]: I0123 18:31:02.236251 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9fb5995-71ba-46d0-8e43-e5325af334dd","Type":"ContainerStarted","Data":"e2426b70deca52337f9cf3f5f007b9ac1249990c0864f204d2f53fb58cb450c1"} Jan 23 18:31:04 crc kubenswrapper[4688]: I0123 18:31:04.274000 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9fb5995-71ba-46d0-8e43-e5325af334dd","Type":"ContainerStarted","Data":"d494dc859806705c325630d4f07a95ea07e1ef762c74fb91054eba8702545250"} Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.320213 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9fb5995-71ba-46d0-8e43-e5325af334dd","Type":"ContainerStarted","Data":"31c2b842ba66b3dd980e335a360136dbac729cafc300ce83e00b4416356f6f2d"} Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.323725 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.346767 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2019424 podStartE2EDuration="8.346743531s" podCreationTimestamp="2026-01-23 18:30:58 +0000 UTC" firstStartedPulling="2026-01-23 18:30:59.325516199 +0000 UTC m=+1454.321340640" lastFinishedPulling="2026-01-23 18:31:05.47031733 +0000 UTC m=+1460.466141771" observedRunningTime="2026-01-23 18:31:06.344362133 +0000 UTC m=+1461.340186594" watchObservedRunningTime="2026-01-23 18:31:06.346743531 +0000 UTC m=+1461.342567972" Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.691698 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.693441 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.964954 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:31:06 crc kubenswrapper[4688]: I0123 18:31:06.965417 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:31:07 crc kubenswrapper[4688]: I0123 18:31:07.704393 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:07 crc kubenswrapper[4688]: I0123 18:31:07.704433 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.214:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:08 crc kubenswrapper[4688]: I0123 18:31:08.342837 4688 generic.go:334] "Generic (PLEG): 
container finished" podID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerID="41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69" exitCode=0 Jan 23 18:31:08 crc kubenswrapper[4688]: I0123 18:31:08.342922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerDied","Data":"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69"} Jan 23 18:31:08 crc kubenswrapper[4688]: I0123 18:31:08.585198 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:31:08 crc kubenswrapper[4688]: I0123 18:31:08.587117 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:31:08 crc kubenswrapper[4688]: I0123 18:31:08.591299 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:31:09 crc kubenswrapper[4688]: I0123 18:31:09.373666 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerStarted","Data":"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a"} Jan 23 18:31:09 crc kubenswrapper[4688]: I0123 18:31:09.374020 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:31:10 crc kubenswrapper[4688]: I0123 18:31:10.366964 4688 generic.go:334] "Generic (PLEG): container finished" podID="82599c77-bc56-4a8b-a55a-e18645e80522" containerID="531a509f503e7e517945c6cc11c25f9659de7af2d1bb94ddd7920f1f0e9e443f" exitCode=0 Jan 23 18:31:10 crc kubenswrapper[4688]: I0123 18:31:10.367032 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7vfb" event={"ID":"82599c77-bc56-4a8b-a55a-e18645e80522","Type":"ContainerDied","Data":"531a509f503e7e517945c6cc11c25f9659de7af2d1bb94ddd7920f1f0e9e443f"} Jan 23 18:31:10 crc kubenswrapper[4688]: I0123 18:31:10.407620 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nn944" podStartSLOduration=3.793494139 podStartE2EDuration="13.40759682s" podCreationTimestamp="2026-01-23 18:30:57 +0000 UTC" firstStartedPulling="2026-01-23 18:30:59.159528159 +0000 UTC m=+1454.155352600" lastFinishedPulling="2026-01-23 18:31:08.77363084 +0000 UTC m=+1463.769455281" observedRunningTime="2026-01-23 18:31:09.449486495 +0000 UTC m=+1464.445310936" watchObservedRunningTime="2026-01-23 18:31:10.40759682 +0000 UTC m=+1465.403421261" Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.802431 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.928633 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle\") pod \"82599c77-bc56-4a8b-a55a-e18645e80522\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.928889 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data\") pod \"82599c77-bc56-4a8b-a55a-e18645e80522\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.928934 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8dg\" (UniqueName: \"kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg\") pod \"82599c77-bc56-4a8b-a55a-e18645e80522\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.929158 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts\") pod \"82599c77-bc56-4a8b-a55a-e18645e80522\" (UID: \"82599c77-bc56-4a8b-a55a-e18645e80522\") " Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.935415 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts" (OuterVolumeSpecName: "scripts") pod "82599c77-bc56-4a8b-a55a-e18645e80522" (UID: "82599c77-bc56-4a8b-a55a-e18645e80522"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.954850 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg" (OuterVolumeSpecName: "kube-api-access-6h8dg") pod "82599c77-bc56-4a8b-a55a-e18645e80522" (UID: "82599c77-bc56-4a8b-a55a-e18645e80522"). InnerVolumeSpecName "kube-api-access-6h8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.963970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82599c77-bc56-4a8b-a55a-e18645e80522" (UID: "82599c77-bc56-4a8b-a55a-e18645e80522"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:11 crc kubenswrapper[4688]: I0123 18:31:11.965718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data" (OuterVolumeSpecName: "config-data") pod "82599c77-bc56-4a8b-a55a-e18645e80522" (UID: "82599c77-bc56-4a8b-a55a-e18645e80522"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.032442 4688 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.032487 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.032500 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82599c77-bc56-4a8b-a55a-e18645e80522-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.032509 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8dg\" (UniqueName: \"kubernetes.io/projected/82599c77-bc56-4a8b-a55a-e18645e80522-kube-api-access-6h8dg\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.450425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7vfb" event={"ID":"82599c77-bc56-4a8b-a55a-e18645e80522","Type":"ContainerDied","Data":"19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4"} Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.450491 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c174090c87762fb80efab39ca8fb5101c3b9b27372066de4b4272bc09fe2c4" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.450587 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7vfb" Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.615011 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.615349 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-log" containerID="cri-o://12efca1ff5a4c4d63df10e8025c5f19431d3ff243d0612af98cbc31317862b3a" gracePeriod=30 Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.615604 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-api" containerID="cri-o://51de707cfc503fbaf6bb396a0534539ee8e82ba4e875c5ba2b80fab3769fde5b" gracePeriod=30 Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.633127 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.633427 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerName="nova-scheduler-scheduler" containerID="cri-o://f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" gracePeriod=30 Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.678873 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.679201 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" 
containerName="nova-metadata-log" containerID="cri-o://8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af" gracePeriod=30 Jan 23 18:31:12 crc kubenswrapper[4688]: I0123 18:31:12.679344 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" containerID="cri-o://ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777" gracePeriod=30 Jan 23 18:31:13 crc kubenswrapper[4688]: I0123 18:31:13.466492 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4a39100-2407-4654-aa43-fd39b72cb205" containerID="12efca1ff5a4c4d63df10e8025c5f19431d3ff243d0612af98cbc31317862b3a" exitCode=143 Jan 23 18:31:13 crc kubenswrapper[4688]: I0123 18:31:13.466826 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerDied","Data":"12efca1ff5a4c4d63df10e8025c5f19431d3ff243d0612af98cbc31317862b3a"} Jan 23 18:31:13 crc kubenswrapper[4688]: I0123 18:31:13.468956 4688 generic.go:334] "Generic (PLEG): container finished" podID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerID="8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af" exitCode=143 Jan 23 18:31:13 crc kubenswrapper[4688]: I0123 18:31:13.468991 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerDied","Data":"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af"} Jan 23 18:31:15 crc kubenswrapper[4688]: I0123 18:31:15.494890 4688 generic.go:334] "Generic (PLEG): container finished" podID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerID="f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" exitCode=0 Jan 23 18:31:15 crc kubenswrapper[4688]: I0123 18:31:15.495010 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8fc4c6b-d528-4701-8cc1-31553b942468","Type":"ContainerDied","Data":"f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53"} Jan 23 18:31:15 crc kubenswrapper[4688]: I0123 18:31:15.845570 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:56292->10.217.0.212:8775: read: connection reset by peer" Jan 23 18:31:15 crc kubenswrapper[4688]: I0123 18:31:15.845603 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": read tcp 10.217.0.2:56294->10.217.0.212:8775: read: connection reset by peer" Jan 23 18:31:15 crc kubenswrapper[4688]: E0123 18:31:15.923928 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53 is running failed: container process not found" containerID="f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:31:15 crc kubenswrapper[4688]: E0123 18:31:15.924512 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53 is running failed: container process not found" containerID="f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:31:15 crc kubenswrapper[4688]: E0123 18:31:15.924988 4688 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53 is running failed: container process not found" containerID="f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 18:31:15 crc kubenswrapper[4688]: E0123 18:31:15.925070 4688 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerName="nova-scheduler-scheduler" Jan 23 18:31:15 crc kubenswrapper[4688]: I0123 18:31:15.991214 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.155115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gpkr\" (UniqueName: \"kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr\") pod \"f8fc4c6b-d528-4701-8cc1-31553b942468\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.155467 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data\") pod \"f8fc4c6b-d528-4701-8cc1-31553b942468\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.155553 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle\") pod \"f8fc4c6b-d528-4701-8cc1-31553b942468\" (UID: \"f8fc4c6b-d528-4701-8cc1-31553b942468\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.165727 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr" (OuterVolumeSpecName: "kube-api-access-9gpkr") pod "f8fc4c6b-d528-4701-8cc1-31553b942468" (UID: "f8fc4c6b-d528-4701-8cc1-31553b942468"). InnerVolumeSpecName "kube-api-access-9gpkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.195939 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data" (OuterVolumeSpecName: "config-data") pod "f8fc4c6b-d528-4701-8cc1-31553b942468" (UID: "f8fc4c6b-d528-4701-8cc1-31553b942468"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.201656 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8fc4c6b-d528-4701-8cc1-31553b942468" (UID: "f8fc4c6b-d528-4701-8cc1-31553b942468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.259934 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gpkr\" (UniqueName: \"kubernetes.io/projected/f8fc4c6b-d528-4701-8cc1-31553b942468-kube-api-access-9gpkr\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.259974 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.259989 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fc4c6b-d528-4701-8cc1-31553b942468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.282706 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.466912 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs\") pod \"87f3ed51-e668-400a-b833-cb63cc5c5632\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.467029 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxgnl\" (UniqueName: \"kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl\") pod \"87f3ed51-e668-400a-b833-cb63cc5c5632\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.467149 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle\") pod \"87f3ed51-e668-400a-b833-cb63cc5c5632\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.467211 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs\") pod \"87f3ed51-e668-400a-b833-cb63cc5c5632\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.467379 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs" (OuterVolumeSpecName: "logs") pod "87f3ed51-e668-400a-b833-cb63cc5c5632" (UID: "87f3ed51-e668-400a-b833-cb63cc5c5632"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.467447 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data\") pod \"87f3ed51-e668-400a-b833-cb63cc5c5632\" (UID: \"87f3ed51-e668-400a-b833-cb63cc5c5632\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.468294 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87f3ed51-e668-400a-b833-cb63cc5c5632-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.472352 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl" (OuterVolumeSpecName: "kube-api-access-rxgnl") pod "87f3ed51-e668-400a-b833-cb63cc5c5632" (UID: "87f3ed51-e668-400a-b833-cb63cc5c5632"). InnerVolumeSpecName "kube-api-access-rxgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.554369 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data" (OuterVolumeSpecName: "config-data") pod "87f3ed51-e668-400a-b833-cb63cc5c5632" (UID: "87f3ed51-e668-400a-b833-cb63cc5c5632"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.557805 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4a39100-2407-4654-aa43-fd39b72cb205" containerID="51de707cfc503fbaf6bb396a0534539ee8e82ba4e875c5ba2b80fab3769fde5b" exitCode=0 Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.557877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerDied","Data":"51de707cfc503fbaf6bb396a0534539ee8e82ba4e875c5ba2b80fab3769fde5b"} Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.577578 4688 generic.go:334] "Generic (PLEG): container finished" podID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerID="ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777" exitCode=0 Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.577954 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.577964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerDied","Data":"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777"} Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.578002 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"87f3ed51-e668-400a-b833-cb63cc5c5632","Type":"ContainerDied","Data":"be129de8e38a82b048ca5a8b0d927e45889c2ddc5eb3d734951194c9302b1787"} Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.578028 4688 scope.go:117] "RemoveContainer" containerID="ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.589207 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.591674 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.591703 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.591727 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82599c77-bc56-4a8b-a55a-e18645e80522" containerName="nova-manage" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.591733 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="82599c77-bc56-4a8b-a55a-e18645e80522" containerName="nova-manage" Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.591760 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerName="nova-scheduler-scheduler" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.591768 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerName="nova-scheduler-scheduler" Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.591787 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.591794 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.592283 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-metadata" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.592314 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" containerName="nova-metadata-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.592333 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="82599c77-bc56-4a8b-a55a-e18645e80522" containerName="nova-manage" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.592352 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" containerName="nova-scheduler-scheduler" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.595504 4688 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.599365 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87f3ed51-e668-400a-b833-cb63cc5c5632" (UID: "87f3ed51-e668-400a-b833-cb63cc5c5632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.599784 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f8fc4c6b-d528-4701-8cc1-31553b942468","Type":"ContainerDied","Data":"c8784df624a6e16a84184eafe95f3311a423eeb3573f388bd9e682e5c247f200"} Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.599889 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.601668 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.602913 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxgnl\" (UniqueName: \"kubernetes.io/projected/87f3ed51-e668-400a-b833-cb63cc5c5632-kube-api-access-rxgnl\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.612010 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.645551 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "87f3ed51-e668-400a-b833-cb63cc5c5632" (UID: "87f3ed51-e668-400a-b833-cb63cc5c5632"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.741985 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.742032 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/87f3ed51-e668-400a-b833-cb63cc5c5632-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.768382 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.779644 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.790154 4688 scope.go:117] "RemoveContainer" containerID="8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.802859 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.819069 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.819671 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-api" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.819685 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-api" Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.819712 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.819717 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.819939 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-log" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.819952 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" containerName="nova-api-api" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.820768 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.826735 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.843790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.844173 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.844436 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnc66\" (UniqueName: \"kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.847830 4688 scope.go:117] "RemoveContainer" containerID="ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.847942 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.851657 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777\": container with ID starting with ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777 not found: ID does not exist" containerID="ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.851689 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777"} err="failed to get container status \"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777\": rpc error: code = NotFound desc = could not find container \"ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777\": container with ID starting with ba0aa98332952050413f6ccb9892b0e3ea760b7f179c35aa7a413c805847b777 not found: ID does not exist" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.851716 4688 scope.go:117] "RemoveContainer" containerID="8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af" Jan 23 18:31:16 crc kubenswrapper[4688]: E0123 18:31:16.854177 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af\": container with ID starting with 8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af not found: ID does not exist" containerID="8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af" Jan 23 18:31:16 crc kubenswrapper[4688]: 
I0123 18:31:16.854348 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af"} err="failed to get container status \"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af\": rpc error: code = NotFound desc = could not find container \"8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af\": container with ID starting with 8229cadc37e2161a25fa870bfb98d2e2f16ce26f601fcb70887d5bebc3b566af not found: ID does not exist" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.854482 4688 scope.go:117] "RemoveContainer" containerID="f99a50b827820b5c40e1102dfc3836efac4f4838561b75d74c2e14efe6ad0c53" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.928569 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.945069 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.945996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946048 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946089 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946160 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946261 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nzbm\" (UniqueName: \"kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946321 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data\") pod \"d4a39100-2407-4654-aa43-fd39b72cb205\" (UID: \"d4a39100-2407-4654-aa43-fd39b72cb205\") " Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946866 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs" (OuterVolumeSpecName: "logs") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946892 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.946964 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt4g4\" (UniqueName: \"kubernetes.io/projected/ba03992a-5a6e-4f80-ad99-977cd7dc8854-kube-api-access-zt4g4\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947025 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-config-data\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947089 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnc66\" (UniqueName: \"kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947125 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947292 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947433 4688 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4a39100-2407-4654-aa43-fd39b72cb205-logs\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.947734 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.948512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.954609 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm" (OuterVolumeSpecName: "kube-api-access-5nzbm") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "kube-api-access-5nzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.967670 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.970340 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.972842 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.973507 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.977009 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:16 crc kubenswrapper[4688]: I0123 18:31:16.977282 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnc66\" (UniqueName: \"kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66\") pod \"certified-operators-m2b7q\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.007778 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.018207 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data" (OuterVolumeSpecName: "config-data") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.025221 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.038774 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4a39100-2407-4654-aa43-fd39b72cb205" (UID: "d4a39100-2407-4654-aa43-fd39b72cb205"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.049128 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt4g4\" (UniqueName: \"kubernetes.io/projected/ba03992a-5a6e-4f80-ad99-977cd7dc8854-kube-api-access-zt4g4\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.049207 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-config-data\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.049266 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.050484 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.050684 4688 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.050707 4688 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.050720 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nzbm\" (UniqueName: \"kubernetes.io/projected/d4a39100-2407-4654-aa43-fd39b72cb205-kube-api-access-5nzbm\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.050760 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a39100-2407-4654-aa43-fd39b72cb205-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.054667 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.056631 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03992a-5a6e-4f80-ad99-977cd7dc8854-config-data\") pod \"nova-scheduler-0\" (UID: \"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.067111 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt4g4\" (UniqueName: \"kubernetes.io/projected/ba03992a-5a6e-4f80-ad99-977cd7dc8854-kube-api-access-zt4g4\") pod \"nova-scheduler-0\" (UID: 
\"ba03992a-5a6e-4f80-ad99-977cd7dc8854\") " pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.092734 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.153905 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-config-data\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.153972 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.154049 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdf4\" (UniqueName: \"kubernetes.io/projected/e8cf51a7-6a79-4d01-8b66-036e1f113df2-kube-api-access-kwdf4\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.154098 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cf51a7-6a79-4d01-8b66-036e1f113df2-logs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.154132 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.156277 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.256445 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-config-data\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.256810 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.256884 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdf4\" (UniqueName: \"kubernetes.io/projected/e8cf51a7-6a79-4d01-8b66-036e1f113df2-kube-api-access-kwdf4\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.257302 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cf51a7-6a79-4d01-8b66-036e1f113df2-logs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.257355 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.257750 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8cf51a7-6a79-4d01-8b66-036e1f113df2-logs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.265638 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.266176 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-config-data\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.270809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8cf51a7-6a79-4d01-8b66-036e1f113df2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.284049 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdf4\" (UniqueName: \"kubernetes.io/projected/e8cf51a7-6a79-4d01-8b66-036e1f113df2-kube-api-access-kwdf4\") 
pod \"nova-metadata-0\" (UID: \"e8cf51a7-6a79-4d01-8b66-036e1f113df2\") " pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.367177 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.373911 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f3ed51-e668-400a-b833-cb63cc5c5632" path="/var/lib/kubelet/pods/87f3ed51-e668-400a-b833-cb63cc5c5632/volumes" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.374563 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8fc4c6b-d528-4701-8cc1-31553b942468" path="/var/lib/kubelet/pods/f8fc4c6b-d528-4701-8cc1-31553b942468/volumes" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.656270 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4a39100-2407-4654-aa43-fd39b72cb205","Type":"ContainerDied","Data":"c2b669420dccbb6f2f0e1c4dfddc595c151ed7ef0044b10db81e78299ac83d9d"} Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.656595 4688 scope.go:117] "RemoveContainer" containerID="51de707cfc503fbaf6bb396a0534539ee8e82ba4e875c5ba2b80fab3769fde5b" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.656757 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.666774 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.770257 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.813258 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.826178 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.832075 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.835659 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.836485 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.847951 4688 scope.go:117] "RemoveContainer" containerID="12efca1ff5a4c4d63df10e8025c5f19431d3ff243d0612af98cbc31317862b3a" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.848175 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.848410 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.861320 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993040 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993440 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s657g\" (UniqueName: \"kubernetes.io/projected/e434f347-02aa-410e-a0c7-bcc65dee86ad-kube-api-access-s657g\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993498 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993526 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-config-data\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993594 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e434f347-02aa-410e-a0c7-bcc65dee86ad-logs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:17 crc kubenswrapper[4688]: I0123 18:31:17.993697 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.095734 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s657g\" (UniqueName: \"kubernetes.io/projected/e434f347-02aa-410e-a0c7-bcc65dee86ad-kube-api-access-s657g\") pod \"nova-api-0\" (UID: 
\"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.095830 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.095905 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-config-data\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.095989 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e434f347-02aa-410e-a0c7-bcc65dee86ad-logs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.096104 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.096245 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.097481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e434f347-02aa-410e-a0c7-bcc65dee86ad-logs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.103332 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.103385 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.105373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.112493 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-public-tls-certs\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.113009 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " 
pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.114932 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e434f347-02aa-410e-a0c7-bcc65dee86ad-config-data\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.117458 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.119082 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s657g\" (UniqueName: \"kubernetes.io/projected/e434f347-02aa-410e-a0c7-bcc65dee86ad-kube-api-access-s657g\") pod \"nova-api-0\" (UID: \"e434f347-02aa-410e-a0c7-bcc65dee86ad\") " pod="openstack/nova-api-0" Jan 23 18:31:18 crc kubenswrapper[4688]: W0123 18:31:18.121142 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8cf51a7_6a79_4d01_8b66_036e1f113df2.slice/crio-a3d81591973f2923e708b6e3833946b2c7be514aa35b3672bed53ce0b76e8ba3 WatchSource:0}: Error finding container a3d81591973f2923e708b6e3833946b2c7be514aa35b3672bed53ce0b76e8ba3: Status 404 returned error can't find the container with id a3d81591973f2923e708b6e3833946b2c7be514aa35b3672bed53ce0b76e8ba3 Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.160707 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:18 crc kubenswrapper[4688]: I0123 18:31:18.319082 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.685141 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba03992a-5a6e-4f80-ad99-977cd7dc8854","Type":"ContainerStarted","Data":"f9b86dfcdd76525f3dd0969beb2bbbdc0ccb15beff44ddb1e64d6eb5db966167"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.685707 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba03992a-5a6e-4f80-ad99-977cd7dc8854","Type":"ContainerStarted","Data":"e1770fccca5c1060f055a84b72da6bf913cbcb9226a76f80ed03b00da49709f5"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.691611 4688 generic.go:334] "Generic (PLEG): container finished" podID="76776205-9368-4420-a8fb-cd03793fd9e2" containerID="6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576" exitCode=0 Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.691705 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerDied","Data":"6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.691866 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerStarted","Data":"49b12240e5cf29c5c8211142c154dc41a5a5529d84b6b80adb9861ab51cf3dea"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.694572 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e8cf51a7-6a79-4d01-8b66-036e1f113df2","Type":"ContainerStarted","Data":"d144d6ea6c4f8d685234c62602771d3246c6cf959ea56bbd93779ae663b87b58"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.694630 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8cf51a7-6a79-4d01-8b66-036e1f113df2","Type":"ContainerStarted","Data":"a3d81591973f2923e708b6e3833946b2c7be514aa35b3672bed53ce0b76e8ba3"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.715985 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.715965114 podStartE2EDuration="2.715965114s" podCreationTimestamp="2026-01-23 18:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:31:18.709736904 +0000 UTC m=+1473.705561345" watchObservedRunningTime="2026-01-23 18:31:18.715965114 +0000 UTC m=+1473.711789555" Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.756630 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:18.862201 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 18:31:19 crc kubenswrapper[4688]: W0123 18:31:18.862821 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode434f347_02aa_410e_a0c7_bcc65dee86ad.slice/crio-8a48a6c4f6bfebbd7c711701650a527e82c427ce68d18cce0a1296ce8a73e23f WatchSource:0}: Error finding container 8a48a6c4f6bfebbd7c711701650a527e82c427ce68d18cce0a1296ce8a73e23f: Status 404 returned error can't find the container with id 8a48a6c4f6bfebbd7c711701650a527e82c427ce68d18cce0a1296ce8a73e23f Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.385475 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a39100-2407-4654-aa43-fd39b72cb205" path="/var/lib/kubelet/pods/d4a39100-2407-4654-aa43-fd39b72cb205/volumes" Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.725554 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e434f347-02aa-410e-a0c7-bcc65dee86ad","Type":"ContainerStarted","Data":"52347fa3562b31d5ac51002eda4c3342e98f7422fa8b87ea6b0b49ea3073eafb"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.725924 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e434f347-02aa-410e-a0c7-bcc65dee86ad","Type":"ContainerStarted","Data":"74be8afd2aba22044e3a244d5fda29f9b497e7a8b0883ca3914538f8e7797354"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.725942 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e434f347-02aa-410e-a0c7-bcc65dee86ad","Type":"ContainerStarted","Data":"8a48a6c4f6bfebbd7c711701650a527e82c427ce68d18cce0a1296ce8a73e23f"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.731531 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8cf51a7-6a79-4d01-8b66-036e1f113df2","Type":"ContainerStarted","Data":"7a67ec5a399a3ce601933cead0d93434158f5a431a3ea1219cbbd4b0587b9b9e"} Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.769679 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7696529119999997 
podStartE2EDuration="2.769652912s" podCreationTimestamp="2026-01-23 18:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:31:19.752756895 +0000 UTC m=+1474.748581346" watchObservedRunningTime="2026-01-23 18:31:19.769652912 +0000 UTC m=+1474.765477353" Jan 23 18:31:19 crc kubenswrapper[4688]: I0123 18:31:19.780949 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.780926136 podStartE2EDuration="3.780926136s" podCreationTimestamp="2026-01-23 18:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:31:19.777902549 +0000 UTC m=+1474.773726990" watchObservedRunningTime="2026-01-23 18:31:19.780926136 +0000 UTC m=+1474.776750577" Jan 23 18:31:20 crc kubenswrapper[4688]: I0123 18:31:20.530615 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:31:20 crc kubenswrapper[4688]: I0123 18:31:20.744891 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerStarted","Data":"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff"} Jan 23 18:31:20 crc kubenswrapper[4688]: I0123 18:31:20.745058 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nn944" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="registry-server" containerID="cri-o://4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a" gracePeriod=2 Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.434174 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.605954 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9nz\" (UniqueName: \"kubernetes.io/projected/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-kube-api-access-rz9nz\") pod \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.606038 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-utilities\") pod \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.606056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-catalog-content\") pod \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\" (UID: \"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e\") " Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.607610 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-utilities" (OuterVolumeSpecName: "utilities") pod "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" (UID: "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.626689 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-kube-api-access-rz9nz" (OuterVolumeSpecName: "kube-api-access-rz9nz") pod "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" (UID: "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e"). InnerVolumeSpecName "kube-api-access-rz9nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.712753 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz9nz\" (UniqueName: \"kubernetes.io/projected/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-kube-api-access-rz9nz\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.712791 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.747616 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" (UID: "6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.773541 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerID="4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a" exitCode=0 Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.773771 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nn944" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.778257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerDied","Data":"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a"} Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.778317 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nn944" event={"ID":"6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e","Type":"ContainerDied","Data":"820516e6ef66f8c74f7f8f105332b716154948194bbf1794514d6eb79f986b4f"} Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.778337 4688 scope.go:117] "RemoveContainer" containerID="4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.803646 4688 generic.go:334] "Generic (PLEG): container finished" podID="76776205-9368-4420-a8fb-cd03793fd9e2" containerID="ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff" exitCode=0 Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.803694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerDied","Data":"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff"} Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.820657 4688 scope.go:117] "RemoveContainer" containerID="41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.821347 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.859248 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.878745 4688 scope.go:117] "RemoveContainer" containerID="e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.880156 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nn944"] Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.934596 4688 scope.go:117] "RemoveContainer" containerID="4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a" Jan 23 18:31:21 crc kubenswrapper[4688]: E0123 18:31:21.942356 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a\": container with ID starting with 4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a not found: ID does not exist" containerID="4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.942615 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a"} err="failed to get container status \"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a\": rpc error: code = NotFound desc = could not find container \"4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a\": 
container with ID starting with 4f9804d406272d50fa466bb059b33b314fda85634aab162c1d26cb2499ca463a not found: ID does not exist" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.942644 4688 scope.go:117] "RemoveContainer" containerID="41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69" Jan 23 18:31:21 crc kubenswrapper[4688]: E0123 18:31:21.946259 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69\": container with ID starting with 41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69 not found: ID does not exist" containerID="41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.946300 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69"} err="failed to get container status \"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69\": rpc error: code = NotFound desc = could not find container \"41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69\": container with ID starting with 41481a20afd2aa1edfbb5f16e3a2d0a7715be9cc22cd761bbbed47158abccb69 not found: ID does not exist" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.946324 4688 scope.go:117] "RemoveContainer" containerID="e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af" Jan 23 18:31:21 crc kubenswrapper[4688]: E0123 18:31:21.948754 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af\": container with ID starting with e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af not found: ID does not exist" containerID="e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af" Jan 23 18:31:21 crc kubenswrapper[4688]: I0123 18:31:21.948795 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af"} err="failed to get container status \"e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af\": rpc error: code = NotFound desc = could not find container \"e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af\": container with ID starting with e6f1da1298a7bad2085de3f3fc836af518f5fa9d2ce9a997aa7ba5edd64762af not found: ID does not exist" Jan 23 18:31:22 crc kubenswrapper[4688]: I0123 18:31:22.157266 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 18:31:22 crc kubenswrapper[4688]: I0123 18:31:22.367307 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:31:22 crc kubenswrapper[4688]: I0123 18:31:22.367818 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 18:31:22 crc kubenswrapper[4688]: I0123 18:31:22.858500 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerStarted","Data":"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669"} Jan 23 18:31:22 crc kubenswrapper[4688]: I0123 18:31:22.889824 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-m2b7q" podStartSLOduration=3.02657059 podStartE2EDuration="6.889801487s" podCreationTimestamp="2026-01-23 18:31:16 +0000 UTC" firstStartedPulling="2026-01-23 18:31:18.693720533 +0000 UTC m=+1473.689544964" lastFinishedPulling="2026-01-23 18:31:22.55695142 +0000 UTC m=+1477.552775861" observedRunningTime="2026-01-23 18:31:22.879963543 +0000 UTC m=+1477.875787984" watchObservedRunningTime="2026-01-23 18:31:22.889801487 +0000 UTC m=+1477.885625928" Jan 23 18:31:23 crc kubenswrapper[4688]: I0123 18:31:23.369055 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" path="/var/lib/kubelet/pods/6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e/volumes" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.093941 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.094293 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.147863 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.157556 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.194880 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.390857 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.391218 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.950938 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 18:31:27 crc kubenswrapper[4688]: I0123 18:31:27.958699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.320135 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.320210 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.380441 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8cf51a7-6a79-4d01-8b66-036e1f113df2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.380479 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8cf51a7-6a79-4d01-8b66-036e1f113df2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.605046 4688 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:28 crc kubenswrapper[4688]: E0123 18:31:28.605692 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="extract-content" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.605719 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="extract-content" Jan 23 18:31:28 crc kubenswrapper[4688]: E0123 18:31:28.605767 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="registry-server" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.605777 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="registry-server" Jan 23 18:31:28 crc kubenswrapper[4688]: E0123 18:31:28.605795 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="extract-utilities" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.605807 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="extract-utilities" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.606074 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4d13a7-3f3e-4e44-b4fc-9e6122cb265e" containerName="registry-server" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.611420 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.628122 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.668699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.788009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.788638 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.788764 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvbmx\" (UniqueName: \"kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.891028 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content\") pod \"redhat-marketplace-ln28r\" (UID: 
\"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.891588 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.891930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.892172 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvbmx\" (UniqueName: \"kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.892218 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.927359 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvbmx\" (UniqueName: \"kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx\") pod \"redhat-marketplace-ln28r\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:28 crc kubenswrapper[4688]: I0123 18:31:28.937255 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.335446 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e434f347-02aa-410e-a0c7-bcc65dee86ad" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.335446 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e434f347-02aa-410e-a0c7-bcc65dee86ad" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:31:29 crc kubenswrapper[4688]: W0123 18:31:29.474121 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c06d993_70f8_4285_8f43_f8f851f1afad.slice/crio-13449bd8495fe7997aa26372a242a0c1289b3c52ec83555b1a47313618047e9a WatchSource:0}: Error finding container 13449bd8495fe7997aa26372a242a0c1289b3c52ec83555b1a47313618047e9a: Status 404 returned error can't find the container with id 13449bd8495fe7997aa26372a242a0c1289b3c52ec83555b1a47313618047e9a Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.481028 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.931986 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerID="76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b" exitCode=0 Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.932033 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerDied","Data":"76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b"} Jan 23 18:31:29 crc kubenswrapper[4688]: I0123 18:31:29.932061 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerStarted","Data":"13449bd8495fe7997aa26372a242a0c1289b3c52ec83555b1a47313618047e9a"} Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.388296 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.388622 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m2b7q" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="registry-server" containerID="cri-o://d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669" gracePeriod=2 Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.936622 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.942439 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerStarted","Data":"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd"} Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.946160 4688 generic.go:334] "Generic (PLEG): container finished" podID="76776205-9368-4420-a8fb-cd03793fd9e2" containerID="d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669" exitCode=0 Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.946280 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2b7q" Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.946292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerDied","Data":"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669"} Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.946445 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2b7q" event={"ID":"76776205-9368-4420-a8fb-cd03793fd9e2","Type":"ContainerDied","Data":"49b12240e5cf29c5c8211142c154dc41a5a5529d84b6b80adb9861ab51cf3dea"} Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.946525 4688 scope.go:117] "RemoveContainer" containerID="d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669" Jan 23 18:31:30 crc kubenswrapper[4688]: I0123 18:31:30.991338 4688 scope.go:117] "RemoveContainer" containerID="ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.024124 4688 scope.go:117] "RemoveContainer" containerID="6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.070447 4688 scope.go:117] "RemoveContainer" containerID="d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669" Jan 23 18:31:31 crc kubenswrapper[4688]: E0123 18:31:31.071042 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669\": container with ID starting with d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669 not found: ID does not exist" containerID="d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.071087 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669"} err="failed to get container status \"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669\": rpc error: code = NotFound desc = could not find container \"d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669\": container with ID starting with d1ce51a62250671acc7ebefc545514ef0f601689d67e057bb3d1f3fea84c8669 not found: ID does not exist" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.071117 4688 scope.go:117] "RemoveContainer" containerID="ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff" Jan 23 18:31:31 crc kubenswrapper[4688]: E0123 18:31:31.071826 4688 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff\": container with ID starting with ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff not found: ID does not exist" containerID="ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.071861 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff"} err="failed to get container status \"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff\": rpc error: code = NotFound desc = could not find container \"ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff\": container with ID starting with ef26504151dc2a3653f832e51afde818a1243fc7e0d38232548d7b88fbd578ff not found: ID does not exist" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.071887 4688 scope.go:117] "RemoveContainer" containerID="6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576" Jan 23 18:31:31 crc kubenswrapper[4688]: E0123 18:31:31.072254 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576\": container with ID starting with 6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576 not found: ID does not exist" containerID="6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.072288 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576"} err="failed to get container status \"6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576\": rpc error: code = NotFound desc = could not find container \"6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576\": container with ID starting with 6f318c79c6c0f2a12eeb9079c23df6c3b27a89ba24d0bef30859e385a5e7b576 not found: ID does not exist" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.117617 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities\") pod \"76776205-9368-4420-a8fb-cd03793fd9e2\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.118080 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnc66\" (UniqueName: \"kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66\") pod \"76776205-9368-4420-a8fb-cd03793fd9e2\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.118231 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content\") pod \"76776205-9368-4420-a8fb-cd03793fd9e2\" (UID: \"76776205-9368-4420-a8fb-cd03793fd9e2\") " Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.118828 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities" (OuterVolumeSpecName: "utilities") pod "76776205-9368-4420-a8fb-cd03793fd9e2" 
(UID: "76776205-9368-4420-a8fb-cd03793fd9e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.123837 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66" (OuterVolumeSpecName: "kube-api-access-nnc66") pod "76776205-9368-4420-a8fb-cd03793fd9e2" (UID: "76776205-9368-4420-a8fb-cd03793fd9e2"). InnerVolumeSpecName "kube-api-access-nnc66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.169272 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76776205-9368-4420-a8fb-cd03793fd9e2" (UID: "76776205-9368-4420-a8fb-cd03793fd9e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.220762 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnc66\" (UniqueName: \"kubernetes.io/projected/76776205-9368-4420-a8fb-cd03793fd9e2-kube-api-access-nnc66\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.220801 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.220814 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76776205-9368-4420-a8fb-cd03793fd9e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.313411 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.322757 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m2b7q"] Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.372212 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" path="/var/lib/kubelet/pods/76776205-9368-4420-a8fb-cd03793fd9e2/volumes" Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.959307 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerID="6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd" exitCode=0 Jan 23 18:31:31 crc kubenswrapper[4688]: I0123 18:31:31.959412 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerDied","Data":"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd"} Jan 23 18:31:32 crc kubenswrapper[4688]: I0123 18:31:32.976786 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerStarted","Data":"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb"} Jan 23 18:31:36 crc kubenswrapper[4688]: I0123 18:31:36.964786 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:31:36 crc kubenswrapper[4688]: I0123 18:31:36.965402 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:31:36 crc kubenswrapper[4688]: I0123 18:31:36.965458 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:31:36 crc kubenswrapper[4688]: I0123 18:31:36.966445 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:31:36 crc kubenswrapper[4688]: I0123 18:31:36.966493 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e" gracePeriod=600 Jan 23 18:31:37 crc kubenswrapper[4688]: I0123 18:31:37.388788 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:31:37 crc kubenswrapper[4688]: I0123 18:31:37.389684 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 18:31:37 crc kubenswrapper[4688]: I0123 18:31:37.402913 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:31:37 crc kubenswrapper[4688]: I0123 18:31:37.417040 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ln28r" podStartSLOduration=6.889121875 podStartE2EDuration="9.417011262s" podCreationTimestamp="2026-01-23 18:31:28 +0000 UTC" firstStartedPulling="2026-01-23 18:31:29.935132232 +0000 UTC m=+1484.930956673" lastFinishedPulling="2026-01-23 18:31:32.463021619 +0000 UTC m=+1487.458846060" observedRunningTime="2026-01-23 18:31:33.006835022 +0000 UTC m=+1488.002659473" watchObservedRunningTime="2026-01-23 18:31:37.417011262 +0000 UTC m=+1492.412835693" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.037544 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e" exitCode=0 Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.037635 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e"} Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.039396 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"} Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.039430 4688 scope.go:117] "RemoveContainer" containerID="c61421e0532a5bce13261538943da0f43d79b47405f6be50cfb642634fbe028e" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.047215 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.337394 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.338561 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.339484 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.363302 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.937628 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:38 crc kubenswrapper[4688]: I0123 18:31:38.937982 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:39 crc kubenswrapper[4688]: I0123 18:31:39.000879 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:39 crc kubenswrapper[4688]: I0123 18:31:39.054785 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 18:31:39 crc kubenswrapper[4688]: I0123 18:31:39.069413 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 18:31:39 crc kubenswrapper[4688]: I0123 18:31:39.143335 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:39 crc kubenswrapper[4688]: I0123 18:31:39.241596 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.077896 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ln28r" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="registry-server" containerID="cri-o://303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb" gracePeriod=2 Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.587429 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.753806 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvbmx\" (UniqueName: \"kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx\") pod \"5c06d993-70f8-4285-8f43-f8f851f1afad\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.753898 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities\") pod \"5c06d993-70f8-4285-8f43-f8f851f1afad\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.753975 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content\") pod \"5c06d993-70f8-4285-8f43-f8f851f1afad\" (UID: \"5c06d993-70f8-4285-8f43-f8f851f1afad\") " Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.755135 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities" (OuterVolumeSpecName: "utilities") pod "5c06d993-70f8-4285-8f43-f8f851f1afad" (UID: "5c06d993-70f8-4285-8f43-f8f851f1afad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.761711 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx" (OuterVolumeSpecName: "kube-api-access-dvbmx") pod "5c06d993-70f8-4285-8f43-f8f851f1afad" (UID: "5c06d993-70f8-4285-8f43-f8f851f1afad"). InnerVolumeSpecName "kube-api-access-dvbmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.785826 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c06d993-70f8-4285-8f43-f8f851f1afad" (UID: "5c06d993-70f8-4285-8f43-f8f851f1afad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.857140 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvbmx\" (UniqueName: \"kubernetes.io/projected/5c06d993-70f8-4285-8f43-f8f851f1afad-kube-api-access-dvbmx\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.857201 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:41 crc kubenswrapper[4688]: I0123 18:31:41.857221 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c06d993-70f8-4285-8f43-f8f851f1afad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.091343 4688 generic.go:334] "Generic (PLEG): container finished" podID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerID="303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb" exitCode=0 Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.091398 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerDied","Data":"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb"} Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.091438 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ln28r" event={"ID":"5c06d993-70f8-4285-8f43-f8f851f1afad","Type":"ContainerDied","Data":"13449bd8495fe7997aa26372a242a0c1289b3c52ec83555b1a47313618047e9a"} Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.091461 4688 scope.go:117] "RemoveContainer" containerID="303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.092590 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ln28r" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.119616 4688 scope.go:117] "RemoveContainer" containerID="6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.127736 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.141442 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ln28r"] Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.147843 4688 scope.go:117] "RemoveContainer" containerID="76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.226331 4688 scope.go:117] "RemoveContainer" containerID="303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb" Jan 23 18:31:42 crc kubenswrapper[4688]: E0123 18:31:42.226860 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb\": container with ID starting with 303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb not found: ID does not exist" containerID="303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.226921 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb"} err="failed to get container status \"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb\": rpc error: code = NotFound desc = could not find container \"303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb\": container with ID starting with 303a89be499eb06138aa071e861f6f0223226546d28e522d30c18da559f3d3fb not found: ID does not exist" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.226957 4688 scope.go:117] "RemoveContainer" containerID="6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd" Jan 23 18:31:42 crc kubenswrapper[4688]: E0123 18:31:42.228414 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd\": container with ID starting with 6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd not found: ID does not exist" containerID="6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.228471 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd"} err="failed to get container status \"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd\": rpc error: code = NotFound desc = could not find container \"6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd\": container with ID starting with 6a0c6d41661f0fd0c31c60a9a0392fab40ea0aef9fd7613016f05f70b93cabcd not found: ID does not exist" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.228501 4688 scope.go:117] "RemoveContainer" containerID="76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b" Jan 23 18:31:42 crc kubenswrapper[4688]: E0123 18:31:42.228944 4688 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b\": container with ID starting with 76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b not found: ID does not exist" containerID="76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b" Jan 23 18:31:42 crc kubenswrapper[4688]: I0123 18:31:42.229008 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b"} err="failed to get container status \"76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b\": rpc error: code = NotFound desc = could not find container \"76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b\": container with ID starting with 76e8cb461c130af4f674f16b3ea2252c583087f2213a60cac6e35af28248599b not found: ID does not exist" Jan 23 18:31:43 crc kubenswrapper[4688]: I0123 18:31:43.372833 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" path="/var/lib/kubelet/pods/5c06d993-70f8-4285-8f43-f8f851f1afad/volumes" Jan 23 18:31:47 crc kubenswrapper[4688]: I0123 18:31:47.409361 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.163168 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 18:31:49.164092 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="extract-utilities" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.164116 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="extract-utilities" Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 18:31:49.168499 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.168561 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 18:31:49.168659 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.168670 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 18:31:49.168683 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="extract-content" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.168690 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="extract-content" Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 18:31:49.168705 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="extract-content" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.168712 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="extract-content" Jan 23 18:31:49 crc kubenswrapper[4688]: E0123 
18:31:49.168731 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="extract-utilities" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.168739 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="extract-utilities" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.169265 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c06d993-70f8-4285-8f43-f8f851f1afad" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.169294 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="76776205-9368-4420-a8fb-cd03793fd9e2" containerName="registry-server" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.171311 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.188572 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.383110 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.383696 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.383985 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdds\" (UniqueName: \"kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.424120 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.486437 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.486602 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.486724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdds\" (UniqueName: 
\"kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.487033 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.487257 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.510304 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdds\" (UniqueName: \"kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds\") pod \"community-operators-kvptb\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:49 crc kubenswrapper[4688]: I0123 18:31:49.514769 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:50 crc kubenswrapper[4688]: I0123 18:31:50.080975 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:31:50 crc kubenswrapper[4688]: W0123 18:31:50.090432 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45908950_31bd_40fa_a99e_531e4b867ab0.slice/crio-c3807ff9c8d68dd3ab1369647057b4793bd065a7a0121314a527c3656157b844 WatchSource:0}: Error finding container c3807ff9c8d68dd3ab1369647057b4793bd065a7a0121314a527c3656157b844: Status 404 returned error can't find the container with id c3807ff9c8d68dd3ab1369647057b4793bd065a7a0121314a527c3656157b844 Jan 23 18:31:50 crc kubenswrapper[4688]: I0123 18:31:50.234315 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerStarted","Data":"c3807ff9c8d68dd3ab1369647057b4793bd065a7a0121314a527c3656157b844"} Jan 23 18:31:51 crc kubenswrapper[4688]: I0123 18:31:51.251301 4688 generic.go:334] "Generic (PLEG): container finished" podID="45908950-31bd-40fa-a99e-531e4b867ab0" containerID="d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91" exitCode=0 Jan 23 18:31:51 crc kubenswrapper[4688]: I0123 18:31:51.251404 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerDied","Data":"d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91"} Jan 23 18:31:52 crc kubenswrapper[4688]: I0123 18:31:52.598036 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="rabbitmq" containerID="cri-o://2c3e96b5f5164328bdb03f50cbc2bbda53492fe97ee3c38f936389940e89ec51" gracePeriod=604795 Jan 23 
18:31:52 crc kubenswrapper[4688]: I0123 18:31:52.983836 4688 scope.go:117] "RemoveContainer" containerID="051ed7968e6fd61b3718018de4019cf76ee819bb9d22aa2c7daa44a1adf025cc" Jan 23 18:31:53 crc kubenswrapper[4688]: I0123 18:31:53.276331 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerStarted","Data":"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849"} Jan 23 18:31:54 crc kubenswrapper[4688]: I0123 18:31:54.290095 4688 generic.go:334] "Generic (PLEG): container finished" podID="45908950-31bd-40fa-a99e-531e4b867ab0" containerID="b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849" exitCode=0 Jan 23 18:31:54 crc kubenswrapper[4688]: I0123 18:31:54.290208 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerDied","Data":"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849"} Jan 23 18:31:54 crc kubenswrapper[4688]: I0123 18:31:54.984106 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="rabbitmq" containerID="cri-o://2f30879cdd11b516d8167680b37abb43efca558cb0f015fe16164231564e96ef" gracePeriod=604795 Jan 23 18:31:55 crc kubenswrapper[4688]: I0123 18:31:55.302972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerStarted","Data":"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff"} Jan 23 18:31:57 crc kubenswrapper[4688]: I0123 18:31:57.613811 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 23 18:31:58 crc kubenswrapper[4688]: I0123 18:31:58.094178 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.351754 4688 generic.go:334] "Generic (PLEG): container finished" podID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerID="2c3e96b5f5164328bdb03f50cbc2bbda53492fe97ee3c38f936389940e89ec51" exitCode=0 Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.351877 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerDied","Data":"2c3e96b5f5164328bdb03f50cbc2bbda53492fe97ee3c38f936389940e89ec51"} Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.516304 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.516674 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.577959 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.608808 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.621779 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.621929 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622006 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622146 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622265 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622365 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622433 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622497 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622795 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8lxcj\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.622879 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins\") pod \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\" (UID: \"e4d36723-6a61-470a-9107-e5e8cf1c49a0\") " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.623960 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.629377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.631025 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvptb" podStartSLOduration=7.062794834 podStartE2EDuration="10.63100326s" podCreationTimestamp="2026-01-23 18:31:49 +0000 UTC" firstStartedPulling="2026-01-23 18:31:51.256208254 +0000 UTC m=+1506.252032695" lastFinishedPulling="2026-01-23 18:31:54.82441668 +0000 UTC m=+1509.820241121" observedRunningTime="2026-01-23 18:31:55.330755141 +0000 UTC m=+1510.326579592" watchObservedRunningTime="2026-01-23 18:31:59.63100326 +0000 UTC m=+1514.626827701" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.633081 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.662377 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info" (OuterVolumeSpecName: "pod-info") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.673290 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.687102 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.687122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.687922 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj" (OuterVolumeSpecName: "kube-api-access-8lxcj") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "kube-api-access-8lxcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.688436 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data" (OuterVolumeSpecName: "config-data") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728723 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lxcj\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-kube-api-access-8lxcj\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728792 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728805 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728858 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728873 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728884 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e4d36723-6a61-470a-9107-e5e8cf1c49a0-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728898 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728930 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.728944 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e4d36723-6a61-470a-9107-e5e8cf1c49a0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.841935 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.871693 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf" (OuterVolumeSpecName: "server-conf") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.902695 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e4d36723-6a61-470a-9107-e5e8cf1c49a0" (UID: "e4d36723-6a61-470a-9107-e5e8cf1c49a0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.935157 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.935262 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e4d36723-6a61-470a-9107-e5e8cf1c49a0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 18:31:59 crc kubenswrapper[4688]: I0123 18:31:59.935278 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e4d36723-6a61-470a-9107-e5e8cf1c49a0-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.364343 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e4d36723-6a61-470a-9107-e5e8cf1c49a0","Type":"ContainerDied","Data":"c74eae847a88311c1e84d30e458620da81dd8c5b868b70cb395d8d60e36f5e78"} Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.364411 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.364439 4688 scope.go:117] "RemoveContainer" containerID="2c3e96b5f5164328bdb03f50cbc2bbda53492fe97ee3c38f936389940e89ec51" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.404577 4688 scope.go:117] "RemoveContainer" containerID="452d44893c7bbd93eddc82ee7c1bbc84b3793989e71172184890ef83a205acd3" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.429452 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.448959 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.462820 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.484498 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:32:00 crc kubenswrapper[4688]: E0123 18:32:00.497482 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="setup-container" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.497510 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="setup-container" Jan 23 18:32:00 crc kubenswrapper[4688]: E0123 18:32:00.497540 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="rabbitmq" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.497550 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="rabbitmq" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.497855 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" containerName="rabbitmq" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.499551 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504034 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504406 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gr9f8" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504438 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504505 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504656 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504845 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.504993 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.515132 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.546143 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.659389 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.659455 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.659492 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.659597 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.659663 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660045 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660119 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-config-data\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24glh\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-kube-api-access-24glh\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660235 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660268 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.660346 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.762353 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.762431 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.763053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.762509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.763396 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.763434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.767899 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.768085 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.770431 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775337 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-config-data\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775390 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24glh\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-kube-api-access-24glh\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775504 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775552 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775667 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.775803 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.776375 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-config-data\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.776400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.782802 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.783647 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.786684 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.789978 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.800628 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24glh\" (UniqueName: \"kubernetes.io/projected/9829e8b2-ebbc-4326-8a8d-2ceef863a9db-kube-api-access-24glh\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:00 crc kubenswrapper[4688]: I0123 18:32:00.830373 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9829e8b2-ebbc-4326-8a8d-2ceef863a9db\") " pod="openstack/rabbitmq-server-0" Jan 23 18:32:01 crc 
kubenswrapper[4688]: I0123 18:32:01.130968 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 18:32:01 crc kubenswrapper[4688]: I0123 18:32:01.373252 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d36723-6a61-470a-9107-e5e8cf1c49a0" path="/var/lib/kubelet/pods/e4d36723-6a61-470a-9107-e5e8cf1c49a0/volumes" Jan 23 18:32:01 crc kubenswrapper[4688]: I0123 18:32:01.416242 4688 generic.go:334] "Generic (PLEG): container finished" podID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerID="2f30879cdd11b516d8167680b37abb43efca558cb0f015fe16164231564e96ef" exitCode=0 Jan 23 18:32:01 crc kubenswrapper[4688]: I0123 18:32:01.416323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerDied","Data":"2f30879cdd11b516d8167680b37abb43efca558cb0f015fe16164231564e96ef"} Jan 23 18:32:01 crc kubenswrapper[4688]: I0123 18:32:01.737025 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 18:32:01 crc kubenswrapper[4688]: I0123 18:32:01.979144 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.119538 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.119920 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.119952 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.120049 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.120137 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.120164 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.121262 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.121459 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.121535 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.121584 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.121616 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgrxt\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt\") pod \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\" (UID: \"5bf89cbd-9a52-45b0-8e35-1e070a678aea\") " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.122553 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.123225 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.123583 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.126132 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.129446 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.142308 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.144552 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt" (OuterVolumeSpecName: "kube-api-access-sgrxt") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "kube-api-access-sgrxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.144710 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.145599 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info" (OuterVolumeSpecName: "pod-info") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.191141 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data" (OuterVolumeSpecName: "config-data") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.191611 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf" (OuterVolumeSpecName: "server-conf") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225006 4688 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225276 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225348 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225402 4688 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5bf89cbd-9a52-45b0-8e35-1e070a678aea-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225495 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225562 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgrxt\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-kube-api-access-sgrxt\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225632 4688 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225693 4688 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5bf89cbd-9a52-45b0-8e35-1e070a678aea-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.225742 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5bf89cbd-9a52-45b0-8e35-1e070a678aea-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.248796 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.256068 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "5bf89cbd-9a52-45b0-8e35-1e070a678aea" (UID: "5bf89cbd-9a52-45b0-8e35-1e070a678aea"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.327715 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.327757 4688 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5bf89cbd-9a52-45b0-8e35-1e070a678aea-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.411082 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:02 crc kubenswrapper[4688]: E0123 18:32:02.411706 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="rabbitmq" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.411724 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="rabbitmq" Jan 23 18:32:02 crc kubenswrapper[4688]: E0123 18:32:02.411752 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="setup-container" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.411760 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="setup-container" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.412109 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" containerName="rabbitmq" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.413569 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.415744 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.433245 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5bf89cbd-9a52-45b0-8e35-1e070a678aea","Type":"ContainerDied","Data":"e5c5975039225e5a0bb9d22ed76aae1f72757e66485cc620bd433cc44f2fdd9e"} Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.433320 4688 scope.go:117] "RemoveContainer" containerID="2f30879cdd11b516d8167680b37abb43efca558cb0f015fe16164231564e96ef" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.433270 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.436708 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kvptb" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="registry-server" containerID="cri-o://f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff" gracePeriod=2 Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.438788 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9829e8b2-ebbc-4326-8a8d-2ceef863a9db","Type":"ContainerStarted","Data":"9be8f2f269bc8e4b140399730ebb268034d2fdbf0e1f56ff053398d542b835a2"} Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.440534 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.466738 4688 scope.go:117] "RemoveContainer" containerID="c204bcdba9565296476b3294dc89caf2f775ae30d177f4c16ab8aff9f9b3c995" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.495266 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.505536 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532179 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532332 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532421 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532538 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.532646 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvt8v\" (UniqueName: \"kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.538884 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.540996 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.558384 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.558790 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.559113 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.559793 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.560001 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7nsc9" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.560017 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.560015 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.560090 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.635587 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.635948 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636004 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvt8v\" (UniqueName: \"kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v\") pod 
\"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636031 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29a2e74d-781b-4d79-ae54-7a37c75adee5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636096 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636131 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636169 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636412 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636506 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636550 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636627 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636649 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636715 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636738 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29a2e74d-781b-4d79-ae54-7a37c75adee5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636757 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636784 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hss6k\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-kube-api-access-hss6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636842 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636867 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.636937 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.637048 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: 
\"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.637807 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.637899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.637903 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.665500 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvt8v\" (UniqueName: \"kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v\") pod \"dnsmasq-dns-79bd4cc8c9-qpt95\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.730628 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740790 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740845 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740886 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740915 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740968 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29a2e74d-781b-4d79-ae54-7a37c75adee5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.740991 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.741010 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hss6k\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-kube-api-access-hss6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.741046 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.741124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.741171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29a2e74d-781b-4d79-ae54-7a37c75adee5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.741933 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.742894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.743730 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.743892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.744096 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.746875 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/29a2e74d-781b-4d79-ae54-7a37c75adee5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.750264 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.770108 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/29a2e74d-781b-4d79-ae54-7a37c75adee5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.771468 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hss6k\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-kube-api-access-hss6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.773166 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/29a2e74d-781b-4d79-ae54-7a37c75adee5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.775119 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/29a2e74d-781b-4d79-ae54-7a37c75adee5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:02 crc kubenswrapper[4688]: I0123 18:32:02.951770 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.009705 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"29a2e74d-781b-4d79-ae54-7a37c75adee5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.051330 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwdds\" (UniqueName: \"kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds\") pod \"45908950-31bd-40fa-a99e-531e4b867ab0\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.051389 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities\") pod \"45908950-31bd-40fa-a99e-531e4b867ab0\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.051561 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content\") pod \"45908950-31bd-40fa-a99e-531e4b867ab0\" (UID: \"45908950-31bd-40fa-a99e-531e4b867ab0\") " Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.052656 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities" (OuterVolumeSpecName: "utilities") pod "45908950-31bd-40fa-a99e-531e4b867ab0" (UID: "45908950-31bd-40fa-a99e-531e4b867ab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.056893 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds" (OuterVolumeSpecName: "kube-api-access-cwdds") pod "45908950-31bd-40fa-a99e-531e4b867ab0" (UID: "45908950-31bd-40fa-a99e-531e4b867ab0"). InnerVolumeSpecName "kube-api-access-cwdds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.122236 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45908950-31bd-40fa-a99e-531e4b867ab0" (UID: "45908950-31bd-40fa-a99e-531e4b867ab0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.154876 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.154921 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45908950-31bd-40fa-a99e-531e4b867ab0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.154933 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwdds\" (UniqueName: \"kubernetes.io/projected/45908950-31bd-40fa-a99e-531e4b867ab0-kube-api-access-cwdds\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.260024 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.271964 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.377242 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf89cbd-9a52-45b0-8e35-1e070a678aea" path="/var/lib/kubelet/pods/5bf89cbd-9a52-45b0-8e35-1e070a678aea/volumes" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.460413 4688 generic.go:334] "Generic (PLEG): container finished" podID="45908950-31bd-40fa-a99e-531e4b867ab0" containerID="f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff" exitCode=0 Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.460495 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerDied","Data":"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff"} Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.460536 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvptb" event={"ID":"45908950-31bd-40fa-a99e-531e4b867ab0","Type":"ContainerDied","Data":"c3807ff9c8d68dd3ab1369647057b4793bd065a7a0121314a527c3656157b844"} Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.460559 4688 scope.go:117] "RemoveContainer" containerID="f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.460750 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvptb" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.473320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" event={"ID":"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f","Type":"ContainerStarted","Data":"ceb307551f63849c6bd2d713b0f40f726361cb4696d5e8aaf6c520ffc0e006fc"} Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.664950 4688 scope.go:117] "RemoveContainer" containerID="b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.703650 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.711569 4688 scope.go:117] "RemoveContainer" containerID="d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.719490 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kvptb"] Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.752708 4688 scope.go:117] "RemoveContainer" containerID="f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff" Jan 23 18:32:03 crc kubenswrapper[4688]: E0123 18:32:03.753666 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff\": container with ID starting with f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff not found: ID does not exist" containerID="f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.753723 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff"} err="failed to get container status \"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff\": rpc error: code = NotFound desc = could not find container \"f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff\": container with ID starting with f3e64f2caee8cf7ac53105d5fe5fc09197dd3ca137180d749ed657af229cb0ff not found: ID does not exist" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.753768 4688 scope.go:117] "RemoveContainer" containerID="b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849" Jan 23 18:32:03 crc kubenswrapper[4688]: E0123 18:32:03.754156 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849\": container with ID starting with b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849 not found: ID does not exist" containerID="b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.754255 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849"} err="failed to get container status \"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849\": rpc error: code = NotFound desc = could not find container \"b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849\": container with ID starting with b39a5c2e8dd336be886e195962db0c520d95905c0c01cccc17ba58f1334ed849 not found: 
ID does not exist" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.754300 4688 scope.go:117] "RemoveContainer" containerID="d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91" Jan 23 18:32:03 crc kubenswrapper[4688]: E0123 18:32:03.754574 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91\": container with ID starting with d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91 not found: ID does not exist" containerID="d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.754609 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91"} err="failed to get container status \"d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91\": rpc error: code = NotFound desc = could not find container \"d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91\": container with ID starting with d0779b86a55fcacb7909b03f788008e1ab5c6179dcd7c382dc746bebf7e9ac91 not found: ID does not exist" Jan 23 18:32:03 crc kubenswrapper[4688]: I0123 18:32:03.871119 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 18:32:03 crc kubenswrapper[4688]: W0123 18:32:03.882617 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a2e74d_781b_4d79_ae54_7a37c75adee5.slice/crio-34f13a8fcac65d9508debc3a3e368466125eacb58d0365f6d37014dda1953cbc WatchSource:0}: Error finding container 34f13a8fcac65d9508debc3a3e368466125eacb58d0365f6d37014dda1953cbc: Status 404 returned error can't find the container with id 34f13a8fcac65d9508debc3a3e368466125eacb58d0365f6d37014dda1953cbc Jan 23 18:32:04 crc kubenswrapper[4688]: I0123 18:32:04.485196 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9829e8b2-ebbc-4326-8a8d-2ceef863a9db","Type":"ContainerStarted","Data":"e1d30fc71f2df09351bf0978c34eecdc306fbb0b8243d7eaed37807aa3906ffd"} Jan 23 18:32:04 crc kubenswrapper[4688]: I0123 18:32:04.493580 4688 generic.go:334] "Generic (PLEG): container finished" podID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerID="dd80fd955e66c962d6f2454131860dddc77181fab7080ac078df6ce1184d2731" exitCode=0 Jan 23 18:32:04 crc kubenswrapper[4688]: I0123 18:32:04.493684 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" event={"ID":"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f","Type":"ContainerDied","Data":"dd80fd955e66c962d6f2454131860dddc77181fab7080ac078df6ce1184d2731"} Jan 23 18:32:04 crc kubenswrapper[4688]: I0123 18:32:04.504357 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"29a2e74d-781b-4d79-ae54-7a37c75adee5","Type":"ContainerStarted","Data":"34f13a8fcac65d9508debc3a3e368466125eacb58d0365f6d37014dda1953cbc"} Jan 23 18:32:05 crc kubenswrapper[4688]: I0123 18:32:05.371175 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" path="/var/lib/kubelet/pods/45908950-31bd-40fa-a99e-531e4b867ab0/volumes" Jan 23 18:32:05 crc kubenswrapper[4688]: I0123 18:32:05.519874 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"29a2e74d-781b-4d79-ae54-7a37c75adee5","Type":"ContainerStarted","Data":"a84ff874341e547e714c26025563d946a6204b68a25882064c507c476e8f20c3"} Jan 23 18:32:05 crc kubenswrapper[4688]: I0123 18:32:05.521909 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" event={"ID":"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f","Type":"ContainerStarted","Data":"5fa9f81d77550f4da847c9be9114ffeb38562bf74decf27f25c46a077a111cb1"} Jan 23 18:32:05 crc kubenswrapper[4688]: I0123 18:32:05.544986 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" podStartSLOduration=3.544960575 podStartE2EDuration="3.544960575s" podCreationTimestamp="2026-01-23 18:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:05.537042324 +0000 UTC m=+1520.532866765" watchObservedRunningTime="2026-01-23 18:32:05.544960575 +0000 UTC m=+1520.540785016" Jan 23 18:32:06 crc kubenswrapper[4688]: I0123 18:32:06.530590 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.732469 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.809238 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"] Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.809504 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" podUID="1db3efae-8276-4970-9593-b92065efdc42" containerName="dnsmasq-dns" containerID="cri-o://f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7" gracePeriod=10 Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.976131 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-8gcp9"] Jan 23 18:32:12 crc kubenswrapper[4688]: E0123 18:32:12.976944 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="registry-server" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.976967 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="registry-server" Jan 23 18:32:12 crc kubenswrapper[4688]: E0123 18:32:12.977031 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="extract-utilities" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.977040 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="extract-utilities" Jan 23 18:32:12 crc kubenswrapper[4688]: E0123 18:32:12.977059 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="extract-content" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.977067 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="extract-content" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.977396 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45908950-31bd-40fa-a99e-531e4b867ab0" containerName="registry-server" Jan 23 18:32:12 crc kubenswrapper[4688]: I0123 18:32:12.979012 4688 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.022045 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-8gcp9"] Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062485 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4j8\" (UniqueName: \"kubernetes.io/projected/304eee98-817f-482f-88a4-0390cfa06ffc-kube-api-access-vv4j8\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062676 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062756 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062795 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062845 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.062893 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-config\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.063085 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.178726 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.178824 4688 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.178866 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.178925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.178979 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-config\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.179034 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.179076 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv4j8\" (UniqueName: \"kubernetes.io/projected/304eee98-817f-482f-88a4-0390cfa06ffc-kube-api-access-vv4j8\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.180409 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-svc\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.180782 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-sb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.180982 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-config\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.181505 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-dns-swift-storage-0\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.210404 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-openstack-edpm-ipam\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.215818 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304eee98-817f-482f-88a4-0390cfa06ffc-ovsdbserver-nb\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.219386 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv4j8\" (UniqueName: \"kubernetes.io/projected/304eee98-817f-482f-88a4-0390cfa06ffc-kube-api-access-vv4j8\") pod \"dnsmasq-dns-f4d4c4b7-8gcp9\" (UID: \"304eee98-817f-482f-88a4-0390cfa06ffc\") " pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.339514 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.500778 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.674543 4688 generic.go:334] "Generic (PLEG): container finished" podID="1db3efae-8276-4970-9593-b92065efdc42" containerID="f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7" exitCode=0 Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.674602 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" event={"ID":"1db3efae-8276-4970-9593-b92065efdc42","Type":"ContainerDied","Data":"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7"} Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.674639 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" event={"ID":"1db3efae-8276-4970-9593-b92065efdc42","Type":"ContainerDied","Data":"3a56f00984fb28ac8f4056cc097f38478a1c1383a158d302266cdf200c75db08"} Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.674656 4688 scope.go:117] "RemoveContainer" containerID="f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.674803 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-rk52f" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.688996 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g7j4\" (UniqueName: \"kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.689145 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.689212 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.689300 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.689384 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.689463 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb\") pod \"1db3efae-8276-4970-9593-b92065efdc42\" (UID: \"1db3efae-8276-4970-9593-b92065efdc42\") " Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.701130 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4" (OuterVolumeSpecName: "kube-api-access-8g7j4") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "kube-api-access-8g7j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.705669 4688 scope.go:117] "RemoveContainer" containerID="e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.744853 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config" (OuterVolumeSpecName: "config") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.749898 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.751068 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.761518 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.766554 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1db3efae-8276-4970-9593-b92065efdc42" (UID: "1db3efae-8276-4970-9593-b92065efdc42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.781275 4688 scope.go:117] "RemoveContainer" containerID="f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7" Jan 23 18:32:13 crc kubenswrapper[4688]: E0123 18:32:13.781825 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7\": container with ID starting with f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7 not found: ID does not exist" containerID="f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.781865 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7"} err="failed to get container status \"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7\": rpc error: code = NotFound desc = could not find container \"f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7\": container with ID starting with f61df9579137ccb599117aa4156da2790639afe4dbe661e5770730cae8f702b7 not found: ID does not exist" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.781961 4688 scope.go:117] "RemoveContainer" containerID="e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54" Jan 23 18:32:13 crc kubenswrapper[4688]: E0123 18:32:13.782244 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54\": container with ID starting with 
e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54 not found: ID does not exist" containerID="e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.782292 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54"} err="failed to get container status \"e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54\": rpc error: code = NotFound desc = could not find container \"e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54\": container with ID starting with e3b7ab37939acceef310860cfd11759707f9c9da9d7bb5983c049998e9cd7e54 not found: ID does not exist" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.792956 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g7j4\" (UniqueName: \"kubernetes.io/projected/1db3efae-8276-4970-9593-b92065efdc42-kube-api-access-8g7j4\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.792995 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.793007 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.793020 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.793032 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.793046 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db3efae-8276-4970-9593-b92065efdc42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:13 crc kubenswrapper[4688]: I0123 18:32:13.849065 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4d4c4b7-8gcp9"] Jan 23 18:32:14 crc kubenswrapper[4688]: I0123 18:32:14.061938 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"] Jan 23 18:32:14 crc kubenswrapper[4688]: I0123 18:32:14.072540 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-rk52f"] Jan 23 18:32:14 crc kubenswrapper[4688]: I0123 18:32:14.688483 4688 generic.go:334] "Generic (PLEG): container finished" podID="304eee98-817f-482f-88a4-0390cfa06ffc" containerID="5487515983db8afa182a21479a318ff4ada93daa0f6fab8a6c58628ca1148e30" exitCode=0 Jan 23 18:32:14 crc kubenswrapper[4688]: I0123 18:32:14.688534 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" event={"ID":"304eee98-817f-482f-88a4-0390cfa06ffc","Type":"ContainerDied","Data":"5487515983db8afa182a21479a318ff4ada93daa0f6fab8a6c58628ca1148e30"} Jan 23 18:32:14 crc kubenswrapper[4688]: I0123 18:32:14.688860 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" event={"ID":"304eee98-817f-482f-88a4-0390cfa06ffc","Type":"ContainerStarted","Data":"1a0d945ad21fcefbe45b5a0c200198058f869cd5cff2a33f42e847cb84098f90"} Jan 23 18:32:15 crc kubenswrapper[4688]: I0123 18:32:15.370501 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db3efae-8276-4970-9593-b92065efdc42" path="/var/lib/kubelet/pods/1db3efae-8276-4970-9593-b92065efdc42/volumes" Jan 23 18:32:15 crc kubenswrapper[4688]: I0123 18:32:15.709900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" event={"ID":"304eee98-817f-482f-88a4-0390cfa06ffc","Type":"ContainerStarted","Data":"23056bcc3f3102130ae444df61e186d9ec3737d5eaf1edb73d6c91ee95876298"} Jan 23 18:32:15 crc kubenswrapper[4688]: I0123 18:32:15.710272 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:15 crc kubenswrapper[4688]: I0123 18:32:15.733356 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" podStartSLOduration=3.733336396 podStartE2EDuration="3.733336396s" podCreationTimestamp="2026-01-23 18:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:15.732668187 +0000 UTC m=+1530.728492638" watchObservedRunningTime="2026-01-23 18:32:15.733336396 +0000 UTC m=+1530.729160837" Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.342461 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4d4c4b7-8gcp9" Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.416388 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.416856 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="dnsmasq-dns" containerID="cri-o://5fa9f81d77550f4da847c9be9114ffeb38562bf74decf27f25c46a077a111cb1" gracePeriod=10 Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.812574 4688 generic.go:334] "Generic (PLEG): container finished" podID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerID="5fa9f81d77550f4da847c9be9114ffeb38562bf74decf27f25c46a077a111cb1" exitCode=0 Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.812734 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" event={"ID":"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f","Type":"ContainerDied","Data":"5fa9f81d77550f4da847c9be9114ffeb38562bf74decf27f25c46a077a111cb1"} Jan 23 18:32:23 crc kubenswrapper[4688]: I0123 18:32:23.993711 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.052820 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053101 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053156 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053257 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053298 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053399 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvt8v\" (UniqueName: \"kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.053462 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam\") pod \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\" (UID: \"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f\") " Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.062180 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v" (OuterVolumeSpecName: "kube-api-access-hvt8v") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "kube-api-access-hvt8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.126757 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "dns-svc". 
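The "Killing container with a grace period" entry above (gracePeriod=10) is the standard two-phase stop: the runtime delivers SIGTERM, waits up to the grace period for the process to exit (dnsmasq exited 0 well within it here), and only then escalates to SIGKILL. A minimal stdlib sketch of that pattern under those assumptions, Unix-only and not the kubelet's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60") // stand-in for the container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM) // polite request to exit

        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(10 * time.Second): // gracePeriod=10, as in the log
            cmd.Process.Kill() // escalate to SIGKILL
            <-done
            fmt.Println("killed after grace period expired")
        }
    }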
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.128307 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.136207 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config" (OuterVolumeSpecName: "config") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.145322 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.151899 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.155953 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvt8v\" (UniqueName: \"kubernetes.io/projected/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-kube-api-access-hvt8v\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.155993 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.156005 4688 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.156015 4688 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.156025 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.156033 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.167634 4688 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" (UID: "3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.258501 4688 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.825076 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" event={"ID":"3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f","Type":"ContainerDied","Data":"ceb307551f63849c6bd2d713b0f40f726361cb4696d5e8aaf6c520ffc0e006fc"} Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.825139 4688 scope.go:117] "RemoveContainer" containerID="5fa9f81d77550f4da847c9be9114ffeb38562bf74decf27f25c46a077a111cb1" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.825144 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-qpt95" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.861378 4688 scope.go:117] "RemoveContainer" containerID="dd80fd955e66c962d6f2454131860dddc77181fab7080ac078df6ce1184d2731" Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.862361 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:24 crc kubenswrapper[4688]: I0123 18:32:24.873557 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-qpt95"] Jan 23 18:32:25 crc kubenswrapper[4688]: I0123 18:32:25.371585 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" path="/var/lib/kubelet/pods/3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f/volumes" Jan 23 18:32:33 crc kubenswrapper[4688]: I0123 18:32:33.641169 4688 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod45908950-31bd-40fa-a99e-531e4b867ab0"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod45908950-31bd-40fa-a99e-531e4b867ab0] : Timed out while waiting for systemd to remove kubepods-burstable-pod45908950_31bd_40fa_a99e_531e4b867ab0.slice" Jan 23 18:32:35 crc kubenswrapper[4688]: E0123 18:32:35.859349 4688 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9829e8b2_ebbc_4326_8a8d_2ceef863a9db.slice/crio-e1d30fc71f2df09351bf0978c34eecdc306fbb0b8243d7eaed37807aa3906ffd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9829e8b2_ebbc_4326_8a8d_2ceef863a9db.slice/crio-conmon-e1d30fc71f2df09351bf0978c34eecdc306fbb0b8243d7eaed37807aa3906ffd.scope\": RecentStats: unable to find data in memory cache]" Jan 23 18:32:35 crc kubenswrapper[4688]: I0123 18:32:35.955208 4688 generic.go:334] "Generic (PLEG): container finished" podID="9829e8b2-ebbc-4326-8a8d-2ceef863a9db" containerID="e1d30fc71f2df09351bf0978c34eecdc306fbb0b8243d7eaed37807aa3906ffd" exitCode=0 Jan 23 18:32:35 crc kubenswrapper[4688]: I0123 18:32:35.955257 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
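The "Failed to delete cgroup paths ... Timed out while waiting for systemd to remove ..." entry above is a wait-with-deadline that gave up: cgroup removal is delegated to systemd, and the kubelet only observes whether the slice disappears in time. A minimal sketch of that observe-with-timeout shape; the slice path below is hypothetical and the actual removal request is out of scope here:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitGone polls until path no longer exists or the timeout elapses.
    func waitGone(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); os.IsNotExist(err) {
                return nil // removed
            }
            time.Sleep(100 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s to be removed", path)
    }

    func main() {
        // Hypothetical slice path, mirroring the naming in the log entry.
        p := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable-podEXAMPLE.slice"
        if err := waitGone(p, 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }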
event={"ID":"9829e8b2-ebbc-4326-8a8d-2ceef863a9db","Type":"ContainerDied","Data":"e1d30fc71f2df09351bf0978c34eecdc306fbb0b8243d7eaed37807aa3906ffd"} Jan 23 18:32:36 crc kubenswrapper[4688]: I0123 18:32:36.965773 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9829e8b2-ebbc-4326-8a8d-2ceef863a9db","Type":"ContainerStarted","Data":"cee8d1a9550296371aac26f4946b9cca15ade2fdfbf8f537393242907ffaca02"} Jan 23 18:32:36 crc kubenswrapper[4688]: I0123 18:32:36.966233 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 18:32:37 crc kubenswrapper[4688]: I0123 18:32:37.988724 4688 generic.go:334] "Generic (PLEG): container finished" podID="29a2e74d-781b-4d79-ae54-7a37c75adee5" containerID="a84ff874341e547e714c26025563d946a6204b68a25882064c507c476e8f20c3" exitCode=0 Jan 23 18:32:37 crc kubenswrapper[4688]: I0123 18:32:37.988894 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"29a2e74d-781b-4d79-ae54-7a37c75adee5","Type":"ContainerDied","Data":"a84ff874341e547e714c26025563d946a6204b68a25882064c507c476e8f20c3"} Jan 23 18:32:38 crc kubenswrapper[4688]: I0123 18:32:38.027489 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.027469008 podStartE2EDuration="38.027469008s" podCreationTimestamp="2026-01-23 18:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:36.997709197 +0000 UTC m=+1551.993533648" watchObservedRunningTime="2026-01-23 18:32:38.027469008 +0000 UTC m=+1553.023293449" Jan 23 18:32:39 crc kubenswrapper[4688]: I0123 18:32:39.003592 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"29a2e74d-781b-4d79-ae54-7a37c75adee5","Type":"ContainerStarted","Data":"8a4f7c1910b3271cbb5231686d1db5c43b6ad7fa6ba6b260c22b60763bef6bf1"} Jan 23 18:32:39 crc kubenswrapper[4688]: I0123 18:32:39.004095 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:39 crc kubenswrapper[4688]: I0123 18:32:39.032326 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.03230152 podStartE2EDuration="37.03230152s" podCreationTimestamp="2026-01-23 18:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:32:39.026486837 +0000 UTC m=+1554.022311288" watchObservedRunningTime="2026-01-23 18:32:39.03230152 +0000 UTC m=+1554.028125961" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.677135 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk"] Jan 23 18:32:41 crc kubenswrapper[4688]: E0123 18:32:41.678487 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db3efae-8276-4970-9593-b92065efdc42" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678506 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db3efae-8276-4970-9593-b92065efdc42" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: E0123 18:32:41.678537 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db3efae-8276-4970-9593-b92065efdc42" 
containerName="init" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678543 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db3efae-8276-4970-9593-b92065efdc42" containerName="init" Jan 23 18:32:41 crc kubenswrapper[4688]: E0123 18:32:41.678559 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678566 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: E0123 18:32:41.678591 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="init" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678599 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="init" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678901 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db3efae-8276-4970-9593-b92065efdc42" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.678938 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="3416881e-1bb0-4c5e-b1fe-ee8eb54e2d2f" containerName="dnsmasq-dns" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.679954 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.682861 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.683771 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.684118 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.694768 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk"] Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.703067 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.794999 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.795518 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.795695 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t27t\" (UniqueName: \"kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.795912 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.898069 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.898315 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.898394 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.898454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t27t\" (UniqueName: \"kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.907146 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.907279 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.907666 4688 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:41 crc kubenswrapper[4688]: I0123 18:32:41.923897 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t27t\" (UniqueName: \"kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:42 crc kubenswrapper[4688]: I0123 18:32:42.007953 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:32:42 crc kubenswrapper[4688]: I0123 18:32:42.661570 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:32:42 crc kubenswrapper[4688]: I0123 18:32:42.670362 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk"] Jan 23 18:32:43 crc kubenswrapper[4688]: I0123 18:32:43.056079 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" event={"ID":"d81fb34b-f44c-413e-af3a-2b6ed6f82fed","Type":"ContainerStarted","Data":"b18f4d73325ba7c50e7452101d3115a26ef1d90f87b81c85c10625133d5349c3"} Jan 23 18:32:51 crc kubenswrapper[4688]: I0123 18:32:51.135591 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 18:32:53 crc kubenswrapper[4688]: I0123 18:32:53.259156 4688 scope.go:117] "RemoveContainer" containerID="86fd0dfdc243c7e96f43d02c85738d6306b1fb5ca0706cc203e73249995a5731" Jan 23 18:32:53 crc kubenswrapper[4688]: I0123 18:32:53.263889 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 18:32:53 crc kubenswrapper[4688]: I0123 18:32:53.874128 4688 scope.go:117] "RemoveContainer" containerID="b589c945ffaa251f6676c52288282d5b4bc90e25dc3ac88c99b948f829fbf8b9" Jan 23 18:32:53 crc kubenswrapper[4688]: I0123 18:32:53.940876 4688 scope.go:117] "RemoveContainer" containerID="5f77ad78e6807968354ea2c8e95205a19a403f6a80b3ec8d3ab42a3b5e57f882" Jan 23 18:32:54 crc kubenswrapper[4688]: I0123 18:32:54.479320 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:32:55 crc kubenswrapper[4688]: I0123 18:32:55.223447 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" event={"ID":"d81fb34b-f44c-413e-af3a-2b6ed6f82fed","Type":"ContainerStarted","Data":"a24018dab1553d886b6a8747cc4e3079aa36a07487d37fec0585ad5336fd7680"} Jan 23 18:32:55 crc kubenswrapper[4688]: I0123 18:32:55.254642 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" podStartSLOduration=2.439950727 podStartE2EDuration="14.254618447s" podCreationTimestamp="2026-01-23 18:32:41 +0000 UTC" firstStartedPulling="2026-01-23 18:32:42.661310009 +0000 UTC m=+1557.657134450" 
lastFinishedPulling="2026-01-23 18:32:54.475977729 +0000 UTC m=+1569.471802170" observedRunningTime="2026-01-23 18:32:55.241234852 +0000 UTC m=+1570.237059303" watchObservedRunningTime="2026-01-23 18:32:55.254618447 +0000 UTC m=+1570.250442888" Jan 23 18:33:07 crc kubenswrapper[4688]: I0123 18:33:07.523351 4688 generic.go:334] "Generic (PLEG): container finished" podID="d81fb34b-f44c-413e-af3a-2b6ed6f82fed" containerID="a24018dab1553d886b6a8747cc4e3079aa36a07487d37fec0585ad5336fd7680" exitCode=0 Jan 23 18:33:07 crc kubenswrapper[4688]: I0123 18:33:07.531552 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" event={"ID":"d81fb34b-f44c-413e-af3a-2b6ed6f82fed","Type":"ContainerDied","Data":"a24018dab1553d886b6a8747cc4e3079aa36a07487d37fec0585ad5336fd7680"} Jan 23 18:33:08 crc kubenswrapper[4688]: I0123 18:33:08.969812 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.218014 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam\") pod \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.218094 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory\") pod \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.218333 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t27t\" (UniqueName: \"kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t\") pod \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.218408 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle\") pod \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\" (UID: \"d81fb34b-f44c-413e-af3a-2b6ed6f82fed\") " Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.228417 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t" (OuterVolumeSpecName: "kube-api-access-8t27t") pod "d81fb34b-f44c-413e-af3a-2b6ed6f82fed" (UID: "d81fb34b-f44c-413e-af3a-2b6ed6f82fed"). InnerVolumeSpecName "kube-api-access-8t27t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.231500 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d81fb34b-f44c-413e-af3a-2b6ed6f82fed" (UID: "d81fb34b-f44c-413e-af3a-2b6ed6f82fed"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
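The pod_startup_latency_tracker entry above for repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk reports podStartE2EDuration=14.254618447s but podStartSLOduration=2.439950727: the SLO figure excludes the image-pull window between firstStartedPulling and lastFinishedPulling (for the earlier dnsmasq and rabbitmq pods those timestamps are zero, so both figures coincide). The arithmetic checks out, as this stdlib computation with the timestamps copied from the log shows:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Same textual format klog prints for time.Time values.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-23 18:32:41 +0000 UTC")
        running := mustParse("2026-01-23 18:32:55.254618447 +0000 UTC")
        pullStart := mustParse("2026-01-23 18:32:42.661310009 +0000 UTC")
        pullEnd := mustParse("2026-01-23 18:32:54.475977729 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        fmt.Println("podStartE2EDuration:", e2e) // 14.254618447s
        fmt.Println("podStartSLOduration:", slo) // 2.439950727s
    }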
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.264484 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory" (OuterVolumeSpecName: "inventory") pod "d81fb34b-f44c-413e-af3a-2b6ed6f82fed" (UID: "d81fb34b-f44c-413e-af3a-2b6ed6f82fed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.283916 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d81fb34b-f44c-413e-af3a-2b6ed6f82fed" (UID: "d81fb34b-f44c-413e-af3a-2b6ed6f82fed"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.321778 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t27t\" (UniqueName: \"kubernetes.io/projected/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-kube-api-access-8t27t\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.322031 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.322045 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.322057 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d81fb34b-f44c-413e-af3a-2b6ed6f82fed-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.562958 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" event={"ID":"d81fb34b-f44c-413e-af3a-2b6ed6f82fed","Type":"ContainerDied","Data":"b18f4d73325ba7c50e7452101d3115a26ef1d90f87b81c85c10625133d5349c3"} Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.563563 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18f4d73325ba7c50e7452101d3115a26ef1d90f87b81c85c10625133d5349c3" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.563046 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.649934 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b"] Jan 23 18:33:09 crc kubenswrapper[4688]: E0123 18:33:09.650631 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81fb34b-f44c-413e-af3a-2b6ed6f82fed" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.650649 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81fb34b-f44c-413e-af3a-2b6ed6f82fed" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.650958 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81fb34b-f44c-413e-af3a-2b6ed6f82fed" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.651915 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.660266 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.660348 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.660456 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.660582 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.675583 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b"] Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.833610 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.833692 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.834282 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t5tx\" (UniqueName: \"kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.936988 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7t5tx\" (UniqueName: \"kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.937199 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.937259 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.943303 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.956801 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.958881 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t5tx\" (UniqueName: \"kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-ll67b\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.986870 4688 util.go:30] "No sandbox for pod can be found. 
Jan 23 18:33:09 crc kubenswrapper[4688]: I0123 18:33:09.986870 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b"
Jan 23 18:33:10 crc kubenswrapper[4688]: W0123 18:33:10.537557 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb11e8139_4a7d_4cda_8d54_0c88a360f046.slice/crio-180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924 WatchSource:0}: Error finding container 180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924: Status 404 returned error can't find the container with id 180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924
Jan 23 18:33:10 crc kubenswrapper[4688]: I0123 18:33:10.541601 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b"]
Jan 23 18:33:10 crc kubenswrapper[4688]: I0123 18:33:10.575631 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" event={"ID":"b11e8139-4a7d-4cda-8d54-0c88a360f046","Type":"ContainerStarted","Data":"180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924"}
Jan 23 18:33:11 crc kubenswrapper[4688]: I0123 18:33:11.587565 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" event={"ID":"b11e8139-4a7d-4cda-8d54-0c88a360f046","Type":"ContainerStarted","Data":"321e77e3e3839ea5eb43bdcf2731bb9832be9987fffd1b240029ac8c42d83279"}
Jan 23 18:33:11 crc kubenswrapper[4688]: I0123 18:33:11.614126 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" podStartSLOduration=2.186085557 podStartE2EDuration="2.614088054s" podCreationTimestamp="2026-01-23 18:33:09 +0000 UTC" firstStartedPulling="2026-01-23 18:33:10.542371948 +0000 UTC m=+1585.538196389" lastFinishedPulling="2026-01-23 18:33:10.970374445 +0000 UTC m=+1585.966198886" observedRunningTime="2026-01-23 18:33:11.605946336 +0000 UTC m=+1586.601770777" watchObservedRunningTime="2026-01-23 18:33:11.614088054 +0000 UTC m=+1586.609912495"
Jan 23 18:33:14 crc kubenswrapper[4688]: I0123 18:33:14.618632 4688 generic.go:334] "Generic (PLEG): container finished" podID="b11e8139-4a7d-4cda-8d54-0c88a360f046" containerID="321e77e3e3839ea5eb43bdcf2731bb9832be9987fffd1b240029ac8c42d83279" exitCode=0
Jan 23 18:33:14 crc kubenswrapper[4688]: I0123 18:33:14.618806 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" event={"ID":"b11e8139-4a7d-4cda-8d54-0c88a360f046","Type":"ContainerDied","Data":"321e77e3e3839ea5eb43bdcf2731bb9832be9987fffd1b240029ac8c42d83279"}
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.061267 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b"
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.080602 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-ssh-key-openstack-edpm-ipam\") pod \"b11e8139-4a7d-4cda-8d54-0c88a360f046\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") "
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.080835 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory\") pod \"b11e8139-4a7d-4cda-8d54-0c88a360f046\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") "
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.080905 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t5tx\" (UniqueName: \"kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx\") pod \"b11e8139-4a7d-4cda-8d54-0c88a360f046\" (UID: \"b11e8139-4a7d-4cda-8d54-0c88a360f046\") "
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.099875 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx" (OuterVolumeSpecName: "kube-api-access-7t5tx") pod "b11e8139-4a7d-4cda-8d54-0c88a360f046" (UID: "b11e8139-4a7d-4cda-8d54-0c88a360f046"). InnerVolumeSpecName "kube-api-access-7t5tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.116445 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory" (OuterVolumeSpecName: "inventory") pod "b11e8139-4a7d-4cda-8d54-0c88a360f046" (UID: "b11e8139-4a7d-4cda-8d54-0c88a360f046"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.184877 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.184941 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b11e8139-4a7d-4cda-8d54-0c88a360f046-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.184951 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t5tx\" (UniqueName: \"kubernetes.io/projected/b11e8139-4a7d-4cda-8d54-0c88a360f046-kube-api-access-7t5tx\") on node \"crc\" DevicePath \"\"" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.639368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" event={"ID":"b11e8139-4a7d-4cda-8d54-0c88a360f046","Type":"ContainerDied","Data":"180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924"} Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.639653 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180a84a16a765f7663f101c63c1638523793c80e4557adbb8740f0e8d1460924" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.639800 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-ll67b" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.715617 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"] Jan 23 18:33:16 crc kubenswrapper[4688]: E0123 18:33:16.716259 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11e8139-4a7d-4cda-8d54-0c88a360f046" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.716282 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11e8139-4a7d-4cda-8d54-0c88a360f046" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.716493 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11e8139-4a7d-4cda-8d54-0c88a360f046" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.717463 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.719500 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.719657 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.719782 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.721352 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.728063 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"] Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.828931 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9rkb\" (UniqueName: \"kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.829005 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.829276 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.829419 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.931746 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9rkb\" (UniqueName: \"kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.932678 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.933362 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.933405 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.941211 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.941315 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.941482 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:16 crc kubenswrapper[4688]: I0123 18:33:16.948588 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9rkb\" (UniqueName: \"kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" Jan 23 18:33:17 crc kubenswrapper[4688]: I0123 18:33:17.043107 4688 util.go:30] "No sandbox for pod can be found. 
Jan 23 18:33:17 crc kubenswrapper[4688]: I0123 18:33:17.043107 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"
Jan 23 18:33:17 crc kubenswrapper[4688]: I0123 18:33:17.649646 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"]
Jan 23 18:33:17 crc kubenswrapper[4688]: W0123 18:33:17.649843 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcefed39_8bf9_4782_8262_6616eee522f6.slice/crio-1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418 WatchSource:0}: Error finding container 1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418: Status 404 returned error can't find the container with id 1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418
Jan 23 18:33:18 crc kubenswrapper[4688]: I0123 18:33:18.662549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" event={"ID":"fcefed39-8bf9-4782-8262-6616eee522f6","Type":"ContainerStarted","Data":"0284e68a5a931228c1e07ea4e3ea0ce55242ccbbc0fb367ee0e0739373b9374c"}
Jan 23 18:33:18 crc kubenswrapper[4688]: I0123 18:33:18.664024 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" event={"ID":"fcefed39-8bf9-4782-8262-6616eee522f6","Type":"ContainerStarted","Data":"1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418"}
Jan 23 18:33:18 crc kubenswrapper[4688]: I0123 18:33:18.702090 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" podStartSLOduration=2.248556569 podStartE2EDuration="2.70206728s" podCreationTimestamp="2026-01-23 18:33:16 +0000 UTC" firstStartedPulling="2026-01-23 18:33:17.654449799 +0000 UTC m=+1592.650274240" lastFinishedPulling="2026-01-23 18:33:18.1079605 +0000 UTC m=+1593.103784951" observedRunningTime="2026-01-23 18:33:18.698691265 +0000 UTC m=+1593.694515706" watchObservedRunningTime="2026-01-23 18:33:18.70206728 +0000 UTC m=+1593.697891721"
Jan 23 18:33:54 crc kubenswrapper[4688]: I0123 18:33:54.370668 4688 scope.go:117] "RemoveContainer" containerID="62ce1f46b085d1a812e5e3acd914ad43e5d2e2086f7695ec92bbe00cb3ba9c5d"
Jan 23 18:33:54 crc kubenswrapper[4688]: I0123 18:33:54.555764 4688 scope.go:117] "RemoveContainer" containerID="67eddeec582d2097fd83ccf70d7b625bb5d777a4f0a668b075319de94028c377"
Jan 23 18:33:54 crc kubenswrapper[4688]: I0123 18:33:54.588435 4688 scope.go:117] "RemoveContainer" containerID="10452553e627ad3a98a6ca4d955f1f3c9b427d8afde7af56ad4e6603f763a129"
Jan 23 18:33:54 crc kubenswrapper[4688]: I0123 18:33:54.614496 4688 scope.go:117] "RemoveContainer" containerID="e4e12533f97b009396b78264b0386f9f0c7ebea268eacf6a4cd992fafe1c0b95"
Jan 23 18:33:54 crc kubenswrapper[4688]: I0123 18:33:54.782226 4688 scope.go:117] "RemoveContainer" containerID="588d2b239a1b6626600028ec1b36b214f98f0d93d5e0cef36b02110021a836c2"
Jan 23 18:34:06 crc kubenswrapper[4688]: I0123 18:34:06.965389 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:34:06 crc kubenswrapper[4688]: I0123 18:34:06.965967 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:34:36 crc kubenswrapper[4688]: I0123 18:34:36.964961 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:34:36 crc kubenswrapper[4688]: I0123 18:34:36.965630 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:34:54 crc kubenswrapper[4688]: I0123 18:34:54.898288 4688 scope.go:117] "RemoveContainer" containerID="f95bd7d962c5bfada63e3514a530a1139b422e0c58ae9d1e803f35f91a554f59"
Jan 23 18:35:06 crc kubenswrapper[4688]: I0123 18:35:06.965719 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:35:06 crc kubenswrapper[4688]: I0123 18:35:06.966346 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:35:06 crc kubenswrapper[4688]: I0123 18:35:06.966423 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2"
Jan 23 18:35:06 crc kubenswrapper[4688]: I0123 18:35:06.967292 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:35:06 crc kubenswrapper[4688]: I0123 18:35:06.967386 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" gracePeriod=600
Jan 23 18:35:07 crc kubenswrapper[4688]: E0123 18:35:07.142293 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:35:07 crc kubenswrapper[4688]: I0123 18:35:07.145034 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"}
Jan 23 18:35:07 crc kubenswrapper[4688]: I0123 18:35:07.145202 4688 scope.go:117] "RemoveContainer" containerID="efff7e73d0e1ac0534ebe075a3a122ddc634e7b49a03f861c06609aa4fb7858e"
Jan 23 18:35:08 crc kubenswrapper[4688]: I0123 18:35:08.160567 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:35:08 crc kubenswrapper[4688]: E0123 18:35:08.160999 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:35:21 crc kubenswrapper[4688]: I0123 18:35:21.357006 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:35:21 crc kubenswrapper[4688]: E0123 18:35:21.357830 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:35:33 crc kubenswrapper[4688]: I0123 18:35:33.357041 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:35:33 crc kubenswrapper[4688]: E0123 18:35:33.357936 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:35:48 crc kubenswrapper[4688]: I0123 18:35:48.356935 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:35:48 crc kubenswrapper[4688]: E0123 18:35:48.357685 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:00 crc kubenswrapper[4688]: I0123 18:36:00.356863 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:36:00 crc kubenswrapper[4688]: E0123 18:36:00.357795 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.046364 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-n4xx6"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.062543 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c94d-account-create-update-gbvch"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.074159 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d6d4-account-create-update-xskbs"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.084506 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d6d4-account-create-update-xskbs"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.094293 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c94d-account-create-update-gbvch"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.106117 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-n4xx6"]
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.368533 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b0f293-af50-4dda-9036-2247836670da" path="/var/lib/kubelet/pods/50b0f293-af50-4dda-9036-2247836670da/volumes"
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.370486 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dffca7-1daa-4e5f-ba64-a2dbfac4e428" path="/var/lib/kubelet/pods/56dffca7-1daa-4e5f-ba64-a2dbfac4e428/volumes"
Jan 23 18:36:05 crc kubenswrapper[4688]: I0123 18:36:05.371879 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54e40d0-f93c-42db-9efd-f53e6c26730d" path="/var/lib/kubelet/pods/e54e40d0-f93c-42db-9efd-f53e6c26730d/volumes"
Jan 23 18:36:06 crc kubenswrapper[4688]: I0123 18:36:06.029701 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nlmgx"]
Jan 23 18:36:06 crc kubenswrapper[4688]: I0123 18:36:06.063653 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nlmgx"]
Jan 23 18:36:07 crc kubenswrapper[4688]: I0123 18:36:07.369959 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6822fdf0-3b76-48d1-92c5-0a6a31f12ae4" path="/var/lib/kubelet/pods/6822fdf0-3b76-48d1-92c5-0a6a31f12ae4/volumes"
Jan 23 18:36:14 crc kubenswrapper[4688]: I0123 18:36:14.356929 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:36:14 crc kubenswrapper[4688]: E0123 18:36:14.357893 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:16 crc kubenswrapper[4688]: I0123 18:36:16.039524 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nxsr6"]
Jan 23 18:36:16 crc kubenswrapper[4688]: I0123 18:36:16.050079 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nxsr6"]
Jan 23 18:36:17 crc kubenswrapper[4688]: I0123 18:36:17.371365 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798fc77a-0ff3-414c-91e1-d747b952faa2" path="/var/lib/kubelet/pods/798fc77a-0ff3-414c-91e1-d747b952faa2/volumes"
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.083790 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e0dd-account-create-update-wjhkg"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.093942 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-82747"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.102821 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-xrfv5"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.112593 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-0e51-account-create-update-srsmf"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.123637 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-xrfv5"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.135895 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e0dd-account-create-update-wjhkg"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.148260 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-82747"]
Jan 23 18:36:20 crc kubenswrapper[4688]: I0123 18:36:20.161490 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-0e51-account-create-update-srsmf"]
Jan 23 18:36:21 crc kubenswrapper[4688]: I0123 18:36:21.370531 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524e08b9-7bbd-4e77-b8ab-901c43fd8283" path="/var/lib/kubelet/pods/524e08b9-7bbd-4e77-b8ab-901c43fd8283/volumes"
Jan 23 18:36:21 crc kubenswrapper[4688]: I0123 18:36:21.372412 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5e7058-06e1-4c31-b185-61f48f8bd166" path="/var/lib/kubelet/pods/5c5e7058-06e1-4c31-b185-61f48f8bd166/volumes"
Jan 23 18:36:21 crc kubenswrapper[4688]: I0123 18:36:21.373632 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f04f7e-bee5-4db9-af24-fef76cd579a4" path="/var/lib/kubelet/pods/66f04f7e-bee5-4db9-af24-fef76cd579a4/volumes"
Jan 23 18:36:21 crc kubenswrapper[4688]: I0123 18:36:21.375500 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e2bac7-43b6-484f-af41-54ebc8205242" path="/var/lib/kubelet/pods/c0e2bac7-43b6-484f-af41-54ebc8205242/volumes"
Jan 23 18:36:29 crc kubenswrapper[4688]: I0123 18:36:29.356702 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:36:29 crc kubenswrapper[4688]: E0123 18:36:29.357936 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:42 crc kubenswrapper[4688]: I0123 18:36:42.357738 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:36:42 crc kubenswrapper[4688]: E0123 18:36:42.358971 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:47 crc kubenswrapper[4688]: I0123 18:36:47.103430 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a15b-account-create-update-cgc5k"]
Jan 23 18:36:47 crc kubenswrapper[4688]: I0123 18:36:47.116198 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a15b-account-create-update-cgc5k"]
Jan 23 18:36:47 crc kubenswrapper[4688]: I0123 18:36:47.369619 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56e3474-2934-4305-8ebf-353db7dbc00a" path="/var/lib/kubelet/pods/e56e3474-2934-4305-8ebf-353db7dbc00a/volumes"
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.048542 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2wc55"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.064518 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9367-account-create-update-j4flr"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.095276 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2wc55"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.114575 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-26sbc"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.129414 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-260f-account-create-update-8zf7b"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.138656 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-zlf47"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.150362 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9367-account-create-update-j4flr"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.160526 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-260f-account-create-update-8zf7b"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.171412 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-26sbc"]
Jan 23 18:36:48 crc kubenswrapper[4688]: I0123 18:36:48.180027 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-zlf47"]
Jan 23 18:36:49 crc kubenswrapper[4688]: I0123 18:36:49.371065 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0a1072-51bd-47a1-a3e0-740f34f179c3" path="/var/lib/kubelet/pods/1f0a1072-51bd-47a1-a3e0-740f34f179c3/volumes"
Jan 23 18:36:49 crc kubenswrapper[4688]: I0123 18:36:49.372778 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265a42d2-70db-43df-a5bf-99a70bfed1cb" path="/var/lib/kubelet/pods/265a42d2-70db-43df-a5bf-99a70bfed1cb/volumes"
Jan 23 18:36:49 crc kubenswrapper[4688]: I0123 18:36:49.374408 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef29235-1e3f-4732-9770-24cf93856028" path="/var/lib/kubelet/pods/8ef29235-1e3f-4732-9770-24cf93856028/volumes"
Jan 23 18:36:49 crc kubenswrapper[4688]: I0123 18:36:49.375787 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a470f046-5473-4e59-9bb1-19eea38494e9" path="/var/lib/kubelet/pods/a470f046-5473-4e59-9bb1-19eea38494e9/volumes"
Jan 23 18:36:49 crc kubenswrapper[4688]: I0123 18:36:49.377412 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce28bac3-dbde-4da0-82bc-60d85b10aec9" path="/var/lib/kubelet/pods/ce28bac3-dbde-4da0-82bc-60d85b10aec9/volumes"
Jan 23 18:36:53 crc kubenswrapper[4688]: I0123 18:36:53.357753 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:36:53 crc kubenswrapper[4688]: E0123 18:36:53.358659 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.004550 4688 scope.go:117] "RemoveContainer" containerID="d6f6b5a36a1b6cdafc898d6d21bb11eeb00a86b94adeb2b2208e3a8e3eb189ba"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.063908 4688 scope.go:117] "RemoveContainer" containerID="e19e1bee8992c2b6ffc64da691b98d4576bd662091a576c1992c5ac2f7aaaeba"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.113125 4688 scope.go:117] "RemoveContainer" containerID="699f940ec7a41e2912d2fc73d69bcbae46459e1c9f12cc086bba4cd5530824e1"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.179391 4688 scope.go:117] "RemoveContainer" containerID="0ca2d6325783c894dd4bab5e5f45e54367a1b60ad0c99ace72905f48a4d290cc"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.217368 4688 scope.go:117] "RemoveContainer" containerID="ffabfd97b87e48d87c24901365ea4b502490159a16f398e49dcfadbea1c36042"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.258246 4688 scope.go:117] "RemoveContainer" containerID="87d3014248fb5e3be16e492a5ffdfb790086341fc372322827d896207bbacbd4"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.308826 4688 scope.go:117] "RemoveContainer" containerID="44403079eba77be7158fa23173b4f341ec6b2eb0eb5eaba6c42d18a8242f4dde"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.334731 4688 scope.go:117] "RemoveContainer" containerID="55f3f1d05edb20baf39e8880bc576fb22722a09291fddae8e20128c31dd602e3"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.375688 4688 scope.go:117] "RemoveContainer" containerID="396cef205752ceb1d27d7e34a9542203e0b70518485963c66db18fae9e06a4ab"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.412401 4688 scope.go:117] "RemoveContainer" containerID="bef3f022e90cf656ff2ebab7c2bac4748c7db573642ec397d898e099adfb5c00"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.437041 4688 scope.go:117] "RemoveContainer" containerID="fa77c1486e66af65f8a95b90c550baafcaa3929dee8614248b622c5b45a96fcd"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.468916 4688 scope.go:117] "RemoveContainer" containerID="763647860de8d45cecf5788b54ff28c4d9c15102d752708bce6aa0e38b5388b0"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.504018 4688 scope.go:117] "RemoveContainer" containerID="35f4c055174ae464b8c87324dd32b8f91aad2c998bc4498b628a5af317e6343b"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.538622 4688 scope.go:117] "RemoveContainer" containerID="ff7f7b9767f65aac9b4b3d3c2b52509bf239f9eb735e64ba3f49157a4e82751a"
Jan 23 18:36:55 crc kubenswrapper[4688]: I0123 18:36:55.565829 4688 scope.go:117] "RemoveContainer" containerID="d7538b29e36f37dfd4cd6e91ea157443e3e4d43d69d205c40fdd4c0700bfbbe6"
Jan 23 18:37:05 crc kubenswrapper[4688]: I0123 18:37:05.370466 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:37:05 crc kubenswrapper[4688]: E0123 18:37:05.371750 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:37:17 crc kubenswrapper[4688]: I0123 18:37:17.357887 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:37:17 crc kubenswrapper[4688]: E0123 18:37:17.359834 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:37:18 crc kubenswrapper[4688]: I0123 18:37:18.048480 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-twr6s"]
Jan 23 18:37:18 crc kubenswrapper[4688]: I0123 18:37:18.065997 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-twr6s"]
Jan 23 18:37:19 crc kubenswrapper[4688]: I0123 18:37:19.368967 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe" path="/var/lib/kubelet/pods/4a3cbc90-d5d6-4b2f-9fef-ec13207a45fe/volumes"
Jan 23 18:37:28 crc kubenswrapper[4688]: I0123 18:37:28.843905 4688 generic.go:334] "Generic (PLEG): container finished" podID="fcefed39-8bf9-4782-8262-6616eee522f6" containerID="0284e68a5a931228c1e07ea4e3ea0ce55242ccbbc0fb367ee0e0739373b9374c" exitCode=0
Jan 23 18:37:28 crc kubenswrapper[4688]: I0123 18:37:28.843986 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" event={"ID":"fcefed39-8bf9-4782-8262-6616eee522f6","Type":"ContainerDied","Data":"0284e68a5a931228c1e07ea4e3ea0ce55242ccbbc0fb367ee0e0739373b9374c"}
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.357433 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:37:30 crc kubenswrapper[4688]: E0123 18:37:30.358365 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.358979 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.423777 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam\") pod \"fcefed39-8bf9-4782-8262-6616eee522f6\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") "
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.423934 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle\") pod \"fcefed39-8bf9-4782-8262-6616eee522f6\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") "
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.424033 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9rkb\" (UniqueName: \"kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb\") pod \"fcefed39-8bf9-4782-8262-6616eee522f6\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") "
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.424175 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory\") pod \"fcefed39-8bf9-4782-8262-6616eee522f6\" (UID: \"fcefed39-8bf9-4782-8262-6616eee522f6\") "
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.433373 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb" (OuterVolumeSpecName: "kube-api-access-l9rkb") pod "fcefed39-8bf9-4782-8262-6616eee522f6" (UID: "fcefed39-8bf9-4782-8262-6616eee522f6"). InnerVolumeSpecName "kube-api-access-l9rkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.447376 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fcefed39-8bf9-4782-8262-6616eee522f6" (UID: "fcefed39-8bf9-4782-8262-6616eee522f6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.465743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory" (OuterVolumeSpecName: "inventory") pod "fcefed39-8bf9-4782-8262-6616eee522f6" (UID: "fcefed39-8bf9-4782-8262-6616eee522f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.482374 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcefed39-8bf9-4782-8262-6616eee522f6" (UID: "fcefed39-8bf9-4782-8262-6616eee522f6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.527141 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.527478 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.527578 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcefed39-8bf9-4782-8262-6616eee522f6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.527662 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9rkb\" (UniqueName: \"kubernetes.io/projected/fcefed39-8bf9-4782-8262-6616eee522f6-kube-api-access-l9rkb\") on node \"crc\" DevicePath \"\""
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.868106 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2" event={"ID":"fcefed39-8bf9-4782-8262-6616eee522f6","Type":"ContainerDied","Data":"1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418"}
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.868152 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1154ff7582676aed091b228c18f08b5394024d18a65499e717c80367e5f5b418"
Jan 23 18:37:30 crc kubenswrapper[4688]: I0123 18:37:30.868260 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.002139 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"]
Jan 23 18:37:31 crc kubenswrapper[4688]: E0123 18:37:31.002645 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcefed39-8bf9-4782-8262-6616eee522f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.002662 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcefed39-8bf9-4782-8262-6616eee522f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.002869 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcefed39-8bf9-4782-8262-6616eee522f6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.003637 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.008037 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.008279 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.008331 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.010398 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.025169 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"]
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.158563 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nvm\" (UniqueName: \"kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.159060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.159132 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.261860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.261920 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.261982 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28nvm\" (UniqueName: \"kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.266795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.271029 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.287132 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28nvm\" (UniqueName: \"kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.319593 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.866722 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l"]
Jan 23 18:37:31 crc kubenswrapper[4688]: I0123 18:37:31.879535 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" event={"ID":"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e","Type":"ContainerStarted","Data":"668765c6ae0aa45d0adad62f2a61badb5b205f8503a3e01726923221019c2055"}
Jan 23 18:37:32 crc kubenswrapper[4688]: I0123 18:37:32.892023 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" event={"ID":"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e","Type":"ContainerStarted","Data":"15f5303616e01aa09b7dfe4772370820f23542df38b522780648cad699a15510"}
Jan 23 18:37:32 crc kubenswrapper[4688]: I0123 18:37:32.912576 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" podStartSLOduration=2.497471569 podStartE2EDuration="2.91255394s" podCreationTimestamp="2026-01-23 18:37:30 +0000 UTC" firstStartedPulling="2026-01-23 18:37:31.871461405 +0000 UTC m=+1846.867285846" lastFinishedPulling="2026-01-23 18:37:32.286543776 +0000 UTC m=+1847.282368217" observedRunningTime="2026-01-23 18:37:32.906399463 +0000 UTC m=+1847.902223914" watchObservedRunningTime="2026-01-23 18:37:32.91255394 +0000 UTC m=+1847.908378381"
Jan 23 18:37:42 crc kubenswrapper[4688]: I0123 18:37:42.356958 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:37:42 crc kubenswrapper[4688]: E0123 18:37:42.358171 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.048486 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-94mh9"]
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.059811 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wcz56"]
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.073807 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-94mh9"]
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.084242 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wcz56"]
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.373505 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620ac0a5-247a-4207-83e0-d6776834d4ad" path="/var/lib/kubelet/pods/620ac0a5-247a-4207-83e0-d6776834d4ad/volumes"
Jan 23 18:37:51 crc kubenswrapper[4688]: I0123 18:37:51.376026 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea982eec-acb6-45c7-8f69-36df2323747c" path="/var/lib/kubelet/pods/ea982eec-acb6-45c7-8f69-36df2323747c/volumes"
Jan 23 18:37:53 crc kubenswrapper[4688]: I0123 18:37:53.357148 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:37:53 crc kubenswrapper[4688]: E0123 18:37:53.358081 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:37:55 crc kubenswrapper[4688]: I0123 18:37:55.938346 4688 scope.go:117] "RemoveContainer" containerID="2f82f78969f6a901660fff71f27b953a7917931860e7f1b20bfaaa60c737f518"
Jan 23 18:37:56 crc kubenswrapper[4688]: I0123 18:37:56.000911 4688 scope.go:117] "RemoveContainer" containerID="f42b7e77fb6c22271ab3fd2c8a41bb234e30a210d262dce7445ac71435e65202"
Jan 23 18:37:56 crc kubenswrapper[4688]: I0123 18:37:56.078158 4688 scope.go:117] "RemoveContainer" containerID="f2bd73f8aadf30071096c98a62dd31573c124f2a1985baff609e903d5d7f7172"
Jan 23 18:38:03 crc kubenswrapper[4688]: I0123 18:38:03.051524 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6fttt"]
Jan 23 18:38:03 crc kubenswrapper[4688]: I0123 18:38:03.064137 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6fttt"]
Jan 23 18:38:03 crc kubenswrapper[4688]: I0123 18:38:03.370323 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85f4c3-ac71-4df0-be19-d498bad38459" path="/var/lib/kubelet/pods/fa85f4c3-ac71-4df0-be19-d498bad38459/volumes"
Jan 23 18:38:04 crc kubenswrapper[4688]: I0123 18:38:04.027384 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-m28xl"]
Jan 23 18:38:04 crc kubenswrapper[4688]: I0123 18:38:04.035025 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m28xl"]
Jan 23 18:38:05 crc kubenswrapper[4688]: I0123 18:38:05.084834 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9lvbq"]
Jan 23 18:38:05 crc kubenswrapper[4688]: I0123 18:38:05.105099 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9lvbq"]
Jan 23 18:38:05 crc kubenswrapper[4688]: I0123 18:38:05.374667 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e41e2a-24eb-4116-8a8a-35e34558ec71" path="/var/lib/kubelet/pods/31e41e2a-24eb-4116-8a8a-35e34558ec71/volumes"
Jan 23 18:38:05 crc kubenswrapper[4688]: I0123 18:38:05.376688 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7226bf67-7adb-4ce2-b595-957d81002a96" path="/var/lib/kubelet/pods/7226bf67-7adb-4ce2-b595-957d81002a96/volumes"
Jan 23 18:38:06 crc kubenswrapper[4688]: I0123 18:38:06.356950 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:38:06 crc kubenswrapper[4688]: E0123 18:38:06.357372 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:38:17 crc kubenswrapper[4688]: I0123 18:38:17.356886 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:38:17 crc kubenswrapper[4688]: E0123 18:38:17.357877 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:38:21 crc kubenswrapper[4688]: I0123 18:38:21.064412 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xmgh7"]
Jan 23 18:38:21 crc kubenswrapper[4688]: I0123 18:38:21.077129 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xmgh7"]
Jan 23 18:38:21 crc kubenswrapper[4688]: I0123 18:38:21.371108 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc227102-c953-4a8b-bfc2-918b63e457c1" path="/var/lib/kubelet/pods/fc227102-c953-4a8b-bfc2-918b63e457c1/volumes"
Jan 23 18:38:25 crc kubenswrapper[4688]: I0123 18:38:25.036146 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vsp8t"]
Jan 23 18:38:25 crc kubenswrapper[4688]: I0123 18:38:25.047475 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vsp8t"]
Jan 23 18:38:25 crc kubenswrapper[4688]: I0123 18:38:25.369772 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d25eb5-0041-42b6-8b61-ad9e728c3049" path="/var/lib/kubelet/pods/b8d25eb5-0041-42b6-8b61-ad9e728c3049/volumes"
Jan 23 18:38:31 crc kubenswrapper[4688]: I0123 18:38:31.357022 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:38:31 crc kubenswrapper[4688]: E0123 18:38:31.357873 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:38:44 crc kubenswrapper[4688]: I0123 18:38:44.356176 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:38:44 crc kubenswrapper[4688]: E0123 18:38:44.357300 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.194765 4688 scope.go:117] "RemoveContainer" containerID="bc7a92edcac4ed02f5d24ee15d4472bf7251e23f1eff22778b985381b2f8da96"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.230386 4688 scope.go:117] "RemoveContainer" containerID="d9ac0803562b6b8420a419dbd19913965963fed62df14784880968613cc21b36"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.281305 4688 scope.go:117] "RemoveContainer" containerID="ca064dbf5bc08e7134acd87df534397b271e4dcb3e7ae009f5374fc5de39b9e5"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.323146 4688 scope.go:117] "RemoveContainer" containerID="356b4164f0ea8137384f762b11a26da39f79f0cbd7592fd69b395ce91bbe8925"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.359307 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:38:56 crc kubenswrapper[4688]: E0123 18:38:56.359657 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:38:56 crc kubenswrapper[4688]: I0123 18:38:56.440943 4688 scope.go:117] "RemoveContainer" containerID="981ba849cc6952d6d50d67b9dd1872de9bbbc764ac40c171480863f34d78f347"
Jan 23 18:39:07 crc kubenswrapper[4688]: I0123 18:39:07.357281 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:39:07 crc kubenswrapper[4688]: E0123 18:39:07.358549 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:39:18 crc kubenswrapper[4688]: I0123 18:39:18.357611 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" Jan 23 18:39:18 crc kubenswrapper[4688]: E0123 18:39:18.358355 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:39:20 crc kubenswrapper[4688]: I0123 18:39:20.045042 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7c8c4"] Jan 23 18:39:20 crc kubenswrapper[4688]: I0123 18:39:20.062478 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pwmxl"] Jan 23 18:39:20 crc kubenswrapper[4688]: I0123 18:39:20.071684 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7c8c4"] Jan 23 18:39:20 crc kubenswrapper[4688]: I0123 18:39:20.080222 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pwmxl"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.033881 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a7dd-account-create-update-wqjfn"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.063741 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-d5drx"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.087632 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-a582-account-create-update-x7kt9"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.103960 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a7dd-account-create-update-wqjfn"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.111805 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9d4c-account-create-update-cl2rb"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.121034 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-d5drx"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.127831 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-a582-account-create-update-x7kt9"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.134740 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9d4c-account-create-update-cl2rb"] Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.370956 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ce47c8-d819-4faf-822d-7aa80bd1eb9d" path="/var/lib/kubelet/pods/39ce47c8-d819-4faf-822d-7aa80bd1eb9d/volumes" Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.371595 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656e3bd1-7057-486b-aa8d-98df6462e588" path="/var/lib/kubelet/pods/656e3bd1-7057-486b-aa8d-98df6462e588/volumes" Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.373535 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b04947b-c624-4375-805e-43988d26b5aa" path="/var/lib/kubelet/pods/7b04947b-c624-4375-805e-43988d26b5aa/volumes" Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.374671 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9cb41621-9757-493e-8164-6822693e8106" path="/var/lib/kubelet/pods/9cb41621-9757-493e-8164-6822693e8106/volumes" Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.375720 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb07c9fd-8a23-4726-8825-2c877f74f27c" path="/var/lib/kubelet/pods/bb07c9fd-8a23-4726-8825-2c877f74f27c/volumes" Jan 23 18:39:21 crc kubenswrapper[4688]: I0123 18:39:21.376702 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c823d536-422a-4bf8-9959-741070231ff4" path="/var/lib/kubelet/pods/c823d536-422a-4bf8-9959-741070231ff4/volumes" Jan 23 18:39:29 crc kubenswrapper[4688]: I0123 18:39:29.262610 4688 generic.go:334] "Generic (PLEG): container finished" podID="0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" containerID="15f5303616e01aa09b7dfe4772370820f23542df38b522780648cad699a15510" exitCode=0 Jan 23 18:39:29 crc kubenswrapper[4688]: I0123 18:39:29.262695 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" event={"ID":"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e","Type":"ContainerDied","Data":"15f5303616e01aa09b7dfe4772370820f23542df38b522780648cad699a15510"} Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.728331 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.790786 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory\") pod \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.790845 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28nvm\" (UniqueName: \"kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm\") pod \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.791023 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam\") pod \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\" (UID: \"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e\") " Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.797666 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm" (OuterVolumeSpecName: "kube-api-access-28nvm") pod "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" (UID: "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e"). InnerVolumeSpecName "kube-api-access-28nvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.822911 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory" (OuterVolumeSpecName: "inventory") pod "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" (UID: "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.829959 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" (UID: "0db8a4c7-1a83-44a3-a9b9-73868a2fe73e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.894036 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.894081 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28nvm\" (UniqueName: \"kubernetes.io/projected/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-kube-api-access-28nvm\") on node \"crc\" DevicePath \"\"" Jan 23 18:39:30 crc kubenswrapper[4688]: I0123 18:39:30.894097 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0db8a4c7-1a83-44a3-a9b9-73868a2fe73e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.292911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l" event={"ID":"0db8a4c7-1a83-44a3-a9b9-73868a2fe73e","Type":"ContainerDied","Data":"668765c6ae0aa45d0adad62f2a61badb5b205f8503a3e01726923221019c2055"} Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.293313 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668765c6ae0aa45d0adad62f2a61badb5b205f8503a3e01726923221019c2055" Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.293280 4688 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.357700 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58"
Jan 23 18:39:31 crc kubenswrapper[4688]: E0123 18:39:31.358022 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.403306 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"]
Jan 23 18:39:31 crc kubenswrapper[4688]: E0123 18:39:31.404065 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.404094 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.404443 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db8a4c7-1a83-44a3-a9b9-73868a2fe73e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.405603 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.410802 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.411150 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.411348 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.413577 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.417954 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"]
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.507151 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.507261 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.507315 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7n5f\" (UniqueName: \"kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.608885 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.608946 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.609000 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7n5f\" (UniqueName: \"kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.613785 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.626459 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.627317 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7n5f\" (UniqueName: \"kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:31 crc kubenswrapper[4688]: I0123 18:39:31.722664 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:39:32 crc kubenswrapper[4688]: I0123 18:39:32.343110 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"]
Jan 23 18:39:32 crc kubenswrapper[4688]: I0123 18:39:32.355347 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 18:39:33 crc kubenswrapper[4688]: I0123 18:39:33.330900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb" event={"ID":"fc079b17-fa36-4e19-aac7-b8c309fa77e1","Type":"ContainerStarted","Data":"c9936719c75455d7d5dc17daaa655743c105ebb67478e835e2c798e9af6a5875"}
Jan 23 18:39:34 crc kubenswrapper[4688]: I0123 18:39:34.341512 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb" event={"ID":"fc079b17-fa36-4e19-aac7-b8c309fa77e1","Type":"ContainerStarted","Data":"39f565bccfd484fe2a776ef0d93aa26076c72ab085fc52488ee89f8fc931653f"}
Jan 23 18:39:34 crc kubenswrapper[4688]: I0123 18:39:34.435865 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb" podStartSLOduration=2.680031456 podStartE2EDuration="3.435843042s" podCreationTimestamp="2026-01-23 18:39:31 +0000 UTC" firstStartedPulling="2026-01-23 18:39:32.355053921 +0000 UTC m=+1967.350878362" lastFinishedPulling="2026-01-23 18:39:33.110865507 +0000 UTC m=+1968.106689948" observedRunningTime="2026-01-23 18:39:34.426280817 +0000 UTC m=+1969.422105258" watchObservedRunningTime="2026-01-23 18:39:34.435843042 +0000 UTC m=+1969.431667483"
m=+1969.431667483" Jan 23 18:39:43 crc kubenswrapper[4688]: I0123 18:39:43.356509 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" Jan 23 18:39:43 crc kubenswrapper[4688]: E0123 18:39:43.357540 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.576111 4688 scope.go:117] "RemoveContainer" containerID="4f25ed3ddb32ffa15900d1526c4e010ca3e8ccff8dd2e77dfb1dd697f8900004" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.614365 4688 scope.go:117] "RemoveContainer" containerID="5e36a4be3644b921e129cae4d97dbf336555ebef4a59317104608a4c070ecde2" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.664748 4688 scope.go:117] "RemoveContainer" containerID="6f96f2954d89021983ed9dbc411c1f7cd6b04f11c06146aa6000ea329be5f3b6" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.713980 4688 scope.go:117] "RemoveContainer" containerID="a68d12fae4044a7655dce5abfa8a7f9dd42de20e2d3c53afc1f43d604e4f93fe" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.769567 4688 scope.go:117] "RemoveContainer" containerID="57f8e4fb6d6d1022c1d0c71f0755b0505fe00ad90da59ce0779622b7b336a835" Jan 23 18:39:56 crc kubenswrapper[4688]: I0123 18:39:56.832986 4688 scope.go:117] "RemoveContainer" containerID="a2694b135b404918ecd9d4e96a9fbd15bc53c646e7e3293c9f7f838a9c52f5af" Jan 23 18:39:57 crc kubenswrapper[4688]: I0123 18:39:57.059506 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gf29"] Jan 23 18:39:57 crc kubenswrapper[4688]: I0123 18:39:57.073282 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-8gf29"] Jan 23 18:39:57 crc kubenswrapper[4688]: I0123 18:39:57.356390 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" Jan 23 18:39:57 crc kubenswrapper[4688]: E0123 18:39:57.356797 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:39:57 crc kubenswrapper[4688]: I0123 18:39:57.374686 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1483d3ee-c9ce-41d9-939c-caa781261c00" path="/var/lib/kubelet/pods/1483d3ee-c9ce-41d9-939c-caa781261c00/volumes" Jan 23 18:40:08 crc kubenswrapper[4688]: I0123 18:40:08.356438 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" Jan 23 18:40:08 crc kubenswrapper[4688]: I0123 18:40:08.687514 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1"} Jan 23 
Jan 23 18:40:23 crc kubenswrapper[4688]: I0123 18:40:23.060007 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9287c"]
Jan 23 18:40:23 crc kubenswrapper[4688]: I0123 18:40:23.070725 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9287c"]
Jan 23 18:40:23 crc kubenswrapper[4688]: I0123 18:40:23.369870 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df94e7f5-9c11-410b-9513-d4e3350e1d29" path="/var/lib/kubelet/pods/df94e7f5-9c11-410b-9513-d4e3350e1d29/volumes"
Jan 23 18:40:30 crc kubenswrapper[4688]: I0123 18:40:30.032437 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tnsn2"]
Jan 23 18:40:30 crc kubenswrapper[4688]: I0123 18:40:30.040601 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tnsn2"]
Jan 23 18:40:31 crc kubenswrapper[4688]: I0123 18:40:31.367542 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab" path="/var/lib/kubelet/pods/1574ebc7-fb52-4eb0-88e9-d2115dfaf2ab/volumes"
Jan 23 18:40:54 crc kubenswrapper[4688]: I0123 18:40:54.212443 4688 generic.go:334] "Generic (PLEG): container finished" podID="fc079b17-fa36-4e19-aac7-b8c309fa77e1" containerID="39f565bccfd484fe2a776ef0d93aa26076c72ab085fc52488ee89f8fc931653f" exitCode=0
Jan 23 18:40:54 crc kubenswrapper[4688]: I0123 18:40:54.212518 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb" event={"ID":"fc079b17-fa36-4e19-aac7-b8c309fa77e1","Type":"ContainerDied","Data":"39f565bccfd484fe2a776ef0d93aa26076c72ab085fc52488ee89f8fc931653f"}
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.709315 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.881150 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam\") pod \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") "
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.881226 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7n5f\" (UniqueName: \"kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f\") pod \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") "
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.881311 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory\") pod \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\" (UID: \"fc079b17-fa36-4e19-aac7-b8c309fa77e1\") "
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.891982 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f" (OuterVolumeSpecName: "kube-api-access-q7n5f") pod "fc079b17-fa36-4e19-aac7-b8c309fa77e1" (UID: "fc079b17-fa36-4e19-aac7-b8c309fa77e1"). InnerVolumeSpecName "kube-api-access-q7n5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.911474 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory" (OuterVolumeSpecName: "inventory") pod "fc079b17-fa36-4e19-aac7-b8c309fa77e1" (UID: "fc079b17-fa36-4e19-aac7-b8c309fa77e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.921378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc079b17-fa36-4e19-aac7-b8c309fa77e1" (UID: "fc079b17-fa36-4e19-aac7-b8c309fa77e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.983771 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.983813 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7n5f\" (UniqueName: \"kubernetes.io/projected/fc079b17-fa36-4e19-aac7-b8c309fa77e1-kube-api-access-q7n5f\") on node \"crc\" DevicePath \"\""
Jan 23 18:40:55 crc kubenswrapper[4688]: I0123 18:40:55.983823 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc079b17-fa36-4e19-aac7-b8c309fa77e1-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.241170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb" event={"ID":"fc079b17-fa36-4e19-aac7-b8c309fa77e1","Type":"ContainerDied","Data":"c9936719c75455d7d5dc17daaa655743c105ebb67478e835e2c798e9af6a5875"}
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.241237 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9936719c75455d7d5dc17daaa655743c105ebb67478e835e2c798e9af6a5875"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.241308 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.339879 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"]
Jan 23 18:40:56 crc kubenswrapper[4688]: E0123 18:40:56.340368 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc079b17-fa36-4e19-aac7-b8c309fa77e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.340387 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc079b17-fa36-4e19-aac7-b8c309fa77e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.340582 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc079b17-fa36-4e19-aac7-b8c309fa77e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.344108 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.347624 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.347641 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.347797 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.348006 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.353837 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"]
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.493587 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.493824 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrfr\" (UniqueName: \"kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.493928 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.596586 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.597727 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrfr\" (UniqueName: \"kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.598080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.602133 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.602729 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.630760 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrfr\" (UniqueName: \"kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b4nck\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.667116 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:40:56 crc kubenswrapper[4688]: I0123 18:40:56.982201 4688 scope.go:117] "RemoveContainer" containerID="35b77bb28805e9d7ad9b70aa1149b6d40234a7736a5cf7a58b3f6f80d6e940c7"
Jan 23 18:40:57 crc kubenswrapper[4688]: I0123 18:40:57.037429 4688 scope.go:117] "RemoveContainer" containerID="87d68444f9ab664301455c7166f3f21f6146a91e7cf6b7a910a5c041f056d061"
Jan 23 18:40:57 crc kubenswrapper[4688]: I0123 18:40:57.089091 4688 scope.go:117] "RemoveContainer" containerID="de483ea2cf0508da8a24bfa7431659d9cdf99e46759873822d340d4bef3be1b8"
Jan 23 18:40:57 crc kubenswrapper[4688]: I0123 18:40:57.231516 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"]
Jan 23 18:40:57 crc kubenswrapper[4688]: I0123 18:40:57.255353 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck" event={"ID":"e744642f-69d6-47a9-83a8-2cc90a504000","Type":"ContainerStarted","Data":"9b535557fbfab5979cfcfd680cd0b2d1b962923a5e2814e55beca0eece065f17"}
Jan 23 18:40:59 crc kubenswrapper[4688]: I0123 18:40:59.273799 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck" event={"ID":"e744642f-69d6-47a9-83a8-2cc90a504000","Type":"ContainerStarted","Data":"a74688dd6bccd2e3770ffe9b4851a0ecb45dc61891eb5f972ec043be43ab51ec"}
Jan 23 18:40:59 crc kubenswrapper[4688]: I0123 18:40:59.294345 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck" podStartSLOduration=1.931129286 podStartE2EDuration="3.294312642s" podCreationTimestamp="2026-01-23 18:40:56 +0000 UTC" firstStartedPulling="2026-01-23 18:40:57.241042134 +0000 UTC m=+2052.236866565" lastFinishedPulling="2026-01-23 18:40:58.60422548 +0000 UTC m=+2053.600049921" observedRunningTime="2026-01-23 18:40:59.288545697 +0000 UTC m=+2054.284370158" watchObservedRunningTime="2026-01-23 18:40:59.294312642 +0000 UTC m=+2054.290137083"
Jan 23 18:41:04 crc kubenswrapper[4688]: I0123 18:41:04.333470 4688 generic.go:334] "Generic (PLEG): container finished" podID="e744642f-69d6-47a9-83a8-2cc90a504000" containerID="a74688dd6bccd2e3770ffe9b4851a0ecb45dc61891eb5f972ec043be43ab51ec" exitCode=0
Jan 23 18:41:04 crc kubenswrapper[4688]: I0123 18:41:04.333549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck" event={"ID":"e744642f-69d6-47a9-83a8-2cc90a504000","Type":"ContainerDied","Data":"a74688dd6bccd2e3770ffe9b4851a0ecb45dc61891eb5f972ec043be43ab51ec"}
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.763684 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.956085 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrfr\" (UniqueName: \"kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr\") pod \"e744642f-69d6-47a9-83a8-2cc90a504000\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") "
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.956552 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory\") pod \"e744642f-69d6-47a9-83a8-2cc90a504000\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") "
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.956751 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam\") pod \"e744642f-69d6-47a9-83a8-2cc90a504000\" (UID: \"e744642f-69d6-47a9-83a8-2cc90a504000\") "
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.969412 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr" (OuterVolumeSpecName: "kube-api-access-8hrfr") pod "e744642f-69d6-47a9-83a8-2cc90a504000" (UID: "e744642f-69d6-47a9-83a8-2cc90a504000"). InnerVolumeSpecName "kube-api-access-8hrfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.990334 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e744642f-69d6-47a9-83a8-2cc90a504000" (UID: "e744642f-69d6-47a9-83a8-2cc90a504000"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:41:05 crc kubenswrapper[4688]: I0123 18:41:05.991369 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory" (OuterVolumeSpecName: "inventory") pod "e744642f-69d6-47a9-83a8-2cc90a504000" (UID: "e744642f-69d6-47a9-83a8-2cc90a504000"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.061125 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.061159 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e744642f-69d6-47a9-83a8-2cc90a504000-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.061172 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrfr\" (UniqueName: \"kubernetes.io/projected/e744642f-69d6-47a9-83a8-2cc90a504000-kube-api-access-8hrfr\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.352400 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck" event={"ID":"e744642f-69d6-47a9-83a8-2cc90a504000","Type":"ContainerDied","Data":"9b535557fbfab5979cfcfd680cd0b2d1b962923a5e2814e55beca0eece065f17"}
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.352448 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b535557fbfab5979cfcfd680cd0b2d1b962923a5e2814e55beca0eece065f17"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.352472 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b4nck"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.437264 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"]
Jan 23 18:41:06 crc kubenswrapper[4688]: E0123 18:41:06.437848 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e744642f-69d6-47a9-83a8-2cc90a504000" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.437876 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e744642f-69d6-47a9-83a8-2cc90a504000" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.438418 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e744642f-69d6-47a9-83a8-2cc90a504000" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.439215 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.441695 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.441925 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.442122 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.442325 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.473494 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.474111 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr2w9\" (UniqueName: \"kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.474637 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.619479 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.619581 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.619725 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr2w9\" (UniqueName: \"kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.626366 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.628946 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.631411 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"]
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.653587 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr2w9\" (UniqueName: \"kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p52gn\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:06 crc kubenswrapper[4688]: I0123 18:41:06.715660 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:07 crc kubenswrapper[4688]: I0123 18:41:07.237808 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"]
Jan 23 18:41:07 crc kubenswrapper[4688]: W0123 18:41:07.252211 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2222dda_2ac5_4212_9cb1_bb87bc961472.slice/crio-3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5 WatchSource:0}: Error finding container 3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5: Status 404 returned error can't find the container with id 3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5
Jan 23 18:41:07 crc kubenswrapper[4688]: I0123 18:41:07.373547 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn" event={"ID":"e2222dda-2ac5-4212-9cb1-bb87bc961472","Type":"ContainerStarted","Data":"3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5"}
Jan 23 18:41:08 crc kubenswrapper[4688]: I0123 18:41:08.383091 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn" event={"ID":"e2222dda-2ac5-4212-9cb1-bb87bc961472","Type":"ContainerStarted","Data":"14a86ef044cb0bd4f44383afccc3b8620b6d4411674edbdb700fe7b9a0feda7f"}
Jan 23 18:41:08 crc kubenswrapper[4688]: I0123 18:41:08.414521 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn" podStartSLOduration=1.963894243 podStartE2EDuration="2.414500743s" podCreationTimestamp="2026-01-23 18:41:06 +0000 UTC" firstStartedPulling="2026-01-23 18:41:07.257545743 +0000 UTC m=+2062.253370184" lastFinishedPulling="2026-01-23 18:41:07.708152233 +0000 UTC m=+2062.703976684" observedRunningTime="2026-01-23 18:41:08.413742101 +0000 UTC m=+2063.409566572" watchObservedRunningTime="2026-01-23 18:41:08.414500743 +0000 UTC m=+2063.410325214"
Jan 23 18:41:12 crc kubenswrapper[4688]: I0123 18:41:12.047080 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7vfb"]
Jan 23 18:41:12 crc kubenswrapper[4688]: I0123 18:41:12.060393 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7vfb"]
Jan 23 18:41:13 crc kubenswrapper[4688]: I0123 18:41:13.368812 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82599c77-bc56-4a8b-a55a-e18645e80522" path="/var/lib/kubelet/pods/82599c77-bc56-4a8b-a55a-e18645e80522/volumes"
Jan 23 18:41:52 crc kubenswrapper[4688]: I0123 18:41:52.847155 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2222dda-2ac5-4212-9cb1-bb87bc961472" containerID="14a86ef044cb0bd4f44383afccc3b8620b6d4411674edbdb700fe7b9a0feda7f" exitCode=0
Jan 23 18:41:52 crc kubenswrapper[4688]: I0123 18:41:52.847254 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn" event={"ID":"e2222dda-2ac5-4212-9cb1-bb87bc961472","Type":"ContainerDied","Data":"14a86ef044cb0bd4f44383afccc3b8620b6d4411674edbdb700fe7b9a0feda7f"}
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.348389 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.538697 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory\") pod \"e2222dda-2ac5-4212-9cb1-bb87bc961472\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") "
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.538854 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam\") pod \"e2222dda-2ac5-4212-9cb1-bb87bc961472\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") "
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.538921 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr2w9\" (UniqueName: \"kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9\") pod \"e2222dda-2ac5-4212-9cb1-bb87bc961472\" (UID: \"e2222dda-2ac5-4212-9cb1-bb87bc961472\") "
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.551925 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9" (OuterVolumeSpecName: "kube-api-access-xr2w9") pod "e2222dda-2ac5-4212-9cb1-bb87bc961472" (UID: "e2222dda-2ac5-4212-9cb1-bb87bc961472"). InnerVolumeSpecName "kube-api-access-xr2w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.569873 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory" (OuterVolumeSpecName: "inventory") pod "e2222dda-2ac5-4212-9cb1-bb87bc961472" (UID: "e2222dda-2ac5-4212-9cb1-bb87bc961472"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.574388 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e2222dda-2ac5-4212-9cb1-bb87bc961472" (UID: "e2222dda-2ac5-4212-9cb1-bb87bc961472"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.641899 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.641931 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr2w9\" (UniqueName: \"kubernetes.io/projected/e2222dda-2ac5-4212-9cb1-bb87bc961472-kube-api-access-xr2w9\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.641948 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2222dda-2ac5-4212-9cb1-bb87bc961472-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.867754 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn" event={"ID":"e2222dda-2ac5-4212-9cb1-bb87bc961472","Type":"ContainerDied","Data":"3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5"}
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.868098 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb024704ef20de4eb2ce4845915b600e8de26b715042ba69823dea3641783c5"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.867811 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p52gn"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.982775 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8"]
Jan 23 18:41:54 crc kubenswrapper[4688]: E0123 18:41:54.983233 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2222dda-2ac5-4212-9cb1-bb87bc961472" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.983253 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2222dda-2ac5-4212-9cb1-bb87bc961472" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.983462 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2222dda-2ac5-4212-9cb1-bb87bc961472" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.984177 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8"
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.988876 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.990730 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.995141 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.996717 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8"] Jan 23 18:41:54 crc kubenswrapper[4688]: I0123 18:41:54.997016 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.053844 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w27zc\" (UniqueName: \"kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.053913 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.054038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.155606 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w27zc\" (UniqueName: \"kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.155717 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.155865 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.160329 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.160850 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.175254 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w27zc\" (UniqueName: \"kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.301338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:41:55 crc kubenswrapper[4688]: I0123 18:41:55.921425 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8"] Jan 23 18:41:56 crc kubenswrapper[4688]: I0123 18:41:56.890955 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" event={"ID":"45576589-fbbb-4556-9306-de4deba76388","Type":"ContainerStarted","Data":"78afd1be25a350bc1806ce8232fe94de25a1f3f21620863d3403af8a36039d11"} Jan 23 18:41:56 crc kubenswrapper[4688]: I0123 18:41:56.891271 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" event={"ID":"45576589-fbbb-4556-9306-de4deba76388","Type":"ContainerStarted","Data":"523718322e868a4515e9e2b4cab0a30689ea48e053e1470678cfe6de0a98fa23"} Jan 23 18:41:56 crc kubenswrapper[4688]: I0123 18:41:56.915925 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" podStartSLOduration=2.210769769 podStartE2EDuration="2.915900674s" podCreationTimestamp="2026-01-23 18:41:54 +0000 UTC" firstStartedPulling="2026-01-23 18:41:55.928155527 +0000 UTC m=+2110.923979968" lastFinishedPulling="2026-01-23 18:41:56.633286432 +0000 UTC m=+2111.629110873" observedRunningTime="2026-01-23 18:41:56.905967508 +0000 UTC m=+2111.901791959" watchObservedRunningTime="2026-01-23 18:41:56.915900674 +0000 UTC m=+2111.911725115" Jan 23 18:41:57 crc kubenswrapper[4688]: I0123 18:41:57.243521 4688 scope.go:117] "RemoveContainer" containerID="531a509f503e7e517945c6cc11c25f9659de7af2d1bb94ddd7920f1f0e9e443f" Jan 23 18:42:01 crc kubenswrapper[4688]: I0123 
Jan 23 18:42:01 crc kubenswrapper[4688]: I0123 18:42:01.945252 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"]
Jan 23 18:42:01 crc kubenswrapper[4688]: I0123 18:42:01.949819 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:01 crc kubenswrapper[4688]: I0123 18:42:01.961046 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"]
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.131306 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.131626 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8qn\" (UniqueName: \"kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.131776 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.233442 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8qn\" (UniqueName: \"kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.233526 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.233613 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.234100 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.234198 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.254910 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8qn\" (UniqueName: \"kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn\") pod \"redhat-operators-k8nl8\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") " pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.286654 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.793820 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"]
Jan 23 18:42:02 crc kubenswrapper[4688]: I0123 18:42:02.987045 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerStarted","Data":"d22cb3aff403ae6a2392d6eee7c2cb10b994a819ced34c99a7802622bfa0fadf"}
Jan 23 18:42:03 crc kubenswrapper[4688]: I0123 18:42:03.996788 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerID="1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5" exitCode=0
Jan 23 18:42:03 crc kubenswrapper[4688]: I0123 18:42:03.996897 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerDied","Data":"1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5"}
Jan 23 18:42:06 crc kubenswrapper[4688]: I0123 18:42:06.030459 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerStarted","Data":"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95"}
Jan 23 18:42:12 crc kubenswrapper[4688]: I0123 18:42:12.095147 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerID="9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95" exitCode=0
Jan 23 18:42:12 crc kubenswrapper[4688]: I0123 18:42:12.095217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerDied","Data":"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95"}
Jan 23 18:42:13 crc kubenswrapper[4688]: I0123 18:42:13.107762 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerStarted","Data":"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46"}
Jan 23 18:42:13 crc kubenswrapper[4688]: I0123 18:42:13.132839 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8nl8" podStartSLOduration=3.356377952 podStartE2EDuration="12.132800053s" podCreationTimestamp="2026-01-23 18:42:01 +0000 UTC" firstStartedPulling="2026-01-23 18:42:03.999217398 +0000 UTC m=+2118.995041839" lastFinishedPulling="2026-01-23 18:42:12.775639499 +0000 UTC m=+2127.771463940" observedRunningTime="2026-01-23 18:42:13.125159284 +0000 UTC m=+2128.120983725" watchObservedRunningTime="2026-01-23 18:42:13.132800053 +0000 UTC m=+2128.128624494"
Jan 23 18:42:22 crc kubenswrapper[4688]: I0123 18:42:22.287351 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:22 crc kubenswrapper[4688]: I0123 18:42:22.287989 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:22 crc kubenswrapper[4688]: I0123 18:42:22.338810 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:23 crc kubenswrapper[4688]: I0123 18:42:23.242990 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:23 crc kubenswrapper[4688]: I0123 18:42:23.289870 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"]
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.213295 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8nl8" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="registry-server" containerID="cri-o://29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46" gracePeriod=2
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.738649 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8nl8"
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.841637 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities\") pod \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") "
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.841772 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content\") pod \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") "
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.841919 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc8qn\" (UniqueName: \"kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn\") pod \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\" (UID: \"6a6e4445-bfe4-446a-9cc6-3831ad0fef36\") "
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.843319 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities" (OuterVolumeSpecName: "utilities") pod "6a6e4445-bfe4-446a-9cc6-3831ad0fef36" (UID: "6a6e4445-bfe4-446a-9cc6-3831ad0fef36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.850487 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn" (OuterVolumeSpecName: "kube-api-access-wc8qn") pod "6a6e4445-bfe4-446a-9cc6-3831ad0fef36" (UID: "6a6e4445-bfe4-446a-9cc6-3831ad0fef36"). InnerVolumeSpecName "kube-api-access-wc8qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
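Annotation: "Killing container with a grace period ... gracePeriod=2" means the kubelet asked the runtime (cri-o here) to stop the container, giving it 2 seconds to exit on SIGTERM before escalating to SIGKILL. A generic sketch of that SIGTERM-then-SIGKILL shape, not CRI-O's actual implementation:

```python
import os, signal, time

def stop_with_grace(pid: int, grace_seconds: float = 2.0) -> None:
    """Illustrative grace-period kill: SIGTERM, poll until the process exits
    or the deadline passes, then SIGKILL. Assumes a local PID for the demo."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # signal 0: existence check only
        except ProcessLookupError:
            return                       # exited within the grace period
        time.sleep(0.05)
    try:
        os.kill(pid, signal.SIGKILL)     # deadline passed; force-kill
    except ProcessLookupError:
        pass                             # raced with a clean exit
```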
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.944494 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.944783 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc8qn\" (UniqueName: \"kubernetes.io/projected/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-kube-api-access-wc8qn\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:25 crc kubenswrapper[4688]: I0123 18:42:25.973385 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a6e4445-bfe4-446a-9cc6-3831ad0fef36" (UID: "6a6e4445-bfe4-446a-9cc6-3831ad0fef36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.046518 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6e4445-bfe4-446a-9cc6-3831ad0fef36-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.231085 4688 generic.go:334] "Generic (PLEG): container finished" podID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerID="29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46" exitCode=0 Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.231142 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerDied","Data":"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46"} Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.231179 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8nl8" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.231233 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8nl8" event={"ID":"6a6e4445-bfe4-446a-9cc6-3831ad0fef36","Type":"ContainerDied","Data":"d22cb3aff403ae6a2392d6eee7c2cb10b994a819ced34c99a7802622bfa0fadf"} Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.231288 4688 scope.go:117] "RemoveContainer" containerID="29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.273407 4688 scope.go:117] "RemoveContainer" containerID="9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.280997 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"] Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.291555 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8nl8"] Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.309519 4688 scope.go:117] "RemoveContainer" containerID="1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.354997 4688 scope.go:117] "RemoveContainer" containerID="29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46" Jan 23 18:42:26 crc kubenswrapper[4688]: E0123 18:42:26.355615 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46\": container with ID starting with 29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46 not found: ID does not exist" containerID="29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.355676 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46"} err="failed to get container status \"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46\": rpc error: code = NotFound desc = could not find container \"29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46\": container with ID starting with 29aec94af7448b9f647756a1fff14663b1d429a0f3c0e530eedd25e7e337fe46 not found: ID does not exist" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.355711 4688 scope.go:117] "RemoveContainer" containerID="9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95" Jan 23 18:42:26 crc kubenswrapper[4688]: E0123 18:42:26.356134 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95\": container with ID starting with 9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95 not found: ID does not exist" containerID="9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.356173 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95"} err="failed to get container status \"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95\": rpc error: code = NotFound desc = could not find container 
\"9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95\": container with ID starting with 9ec5241c497c817e88b04c4660b5785cddfead068311c202bab45ad2ddb9da95 not found: ID does not exist" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.356266 4688 scope.go:117] "RemoveContainer" containerID="1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5" Jan 23 18:42:26 crc kubenswrapper[4688]: E0123 18:42:26.356641 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5\": container with ID starting with 1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5 not found: ID does not exist" containerID="1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5" Jan 23 18:42:26 crc kubenswrapper[4688]: I0123 18:42:26.356667 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5"} err="failed to get container status \"1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5\": rpc error: code = NotFound desc = could not find container \"1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5\": container with ID starting with 1b2bd8c26b7648fe57aa066d679c12b8e9e5776e9aa1d09a6ca59a3c9f63dcb5 not found: ID does not exist" Jan 23 18:42:27 crc kubenswrapper[4688]: I0123 18:42:27.370050 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" path="/var/lib/kubelet/pods/6a6e4445-bfe4-446a-9cc6-3831ad0fef36/volumes" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.275099 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:31 crc kubenswrapper[4688]: E0123 18:42:31.276140 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="extract-utilities" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.276154 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="extract-utilities" Jan 23 18:42:31 crc kubenswrapper[4688]: E0123 18:42:31.276199 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="extract-content" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.276210 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="extract-content" Jan 23 18:42:31 crc kubenswrapper[4688]: E0123 18:42:31.276226 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="registry-server" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.276235 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="registry-server" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.276456 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6e4445-bfe4-446a-9cc6-3831ad0fef36" containerName="registry-server" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.278540 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.289099 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.359691 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6dl8\" (UniqueName: \"kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.359790 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.359926 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.462293 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6dl8\" (UniqueName: \"kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.462384 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.462485 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.463047 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.463199 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.486932 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q6dl8\" (UniqueName: \"kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8\") pod \"redhat-marketplace-hvbdt\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:31 crc kubenswrapper[4688]: I0123 18:42:31.629257 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:32 crc kubenswrapper[4688]: I0123 18:42:32.124132 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:32 crc kubenswrapper[4688]: I0123 18:42:32.301048 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerStarted","Data":"0cb9a154113e30e64e6e0a0fe11e82f76ef2d1ccd1242e348b825ed03d32bb20"} Jan 23 18:42:33 crc kubenswrapper[4688]: I0123 18:42:33.310730 4688 generic.go:334] "Generic (PLEG): container finished" podID="362190be-dab9-4717-b199-39e109e48dd5" containerID="5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda" exitCode=0 Jan 23 18:42:33 crc kubenswrapper[4688]: I0123 18:42:33.310779 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerDied","Data":"5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda"} Jan 23 18:42:35 crc kubenswrapper[4688]: I0123 18:42:35.333479 4688 generic.go:334] "Generic (PLEG): container finished" podID="362190be-dab9-4717-b199-39e109e48dd5" containerID="3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f" exitCode=0 Jan 23 18:42:35 crc kubenswrapper[4688]: I0123 18:42:35.333596 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerDied","Data":"3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f"} Jan 23 18:42:36 crc kubenswrapper[4688]: I0123 18:42:36.345964 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerStarted","Data":"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b"} Jan 23 18:42:36 crc kubenswrapper[4688]: I0123 18:42:36.381030 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvbdt" podStartSLOduration=2.901158058 podStartE2EDuration="5.381005481s" podCreationTimestamp="2026-01-23 18:42:31 +0000 UTC" firstStartedPulling="2026-01-23 18:42:33.313077445 +0000 UTC m=+2148.308901886" lastFinishedPulling="2026-01-23 18:42:35.792924868 +0000 UTC m=+2150.788749309" observedRunningTime="2026-01-23 18:42:36.371095907 +0000 UTC m=+2151.366920348" watchObservedRunningTime="2026-01-23 18:42:36.381005481 +0000 UTC m=+2151.376829932" Jan 23 18:42:36 crc kubenswrapper[4688]: I0123 18:42:36.965965 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:42:36 crc kubenswrapper[4688]: I0123 18:42:36.966095 4688 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.368711 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6r58p"] Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.372299 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.386025 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6r58p"] Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.437022 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gw4r\" (UniqueName: \"kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.438422 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.439238 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.542148 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gw4r\" (UniqueName: \"kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.542562 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.542604 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.543077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content\") pod \"certified-operators-6r58p\" (UID: 
\"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.543101 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.565434 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gw4r\" (UniqueName: \"kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r\") pod \"certified-operators-6r58p\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.630220 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.630267 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.681316 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:41 crc kubenswrapper[4688]: I0123 18:42:41.697338 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:42 crc kubenswrapper[4688]: I0123 18:42:42.238948 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6r58p"] Jan 23 18:42:42 crc kubenswrapper[4688]: I0123 18:42:42.412594 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerStarted","Data":"4dcbf8e5c1ef41e504e62c2407cc9613ee5e2286c1b25b340d0b7fbbe23a9ecd"} Jan 23 18:42:42 crc kubenswrapper[4688]: I0123 18:42:42.473800 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:43 crc kubenswrapper[4688]: I0123 18:42:43.429860 4688 generic.go:334] "Generic (PLEG): container finished" podID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerID="aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440" exitCode=0 Jan 23 18:42:43 crc kubenswrapper[4688]: I0123 18:42:43.430049 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerDied","Data":"aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440"} Jan 23 18:42:43 crc kubenswrapper[4688]: I0123 18:42:43.934004 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:44 crc kubenswrapper[4688]: I0123 18:42:44.441853 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerStarted","Data":"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02"} Jan 23 18:42:44 crc kubenswrapper[4688]: I0123 18:42:44.441945 4688 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-hvbdt" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="registry-server" containerID="cri-o://1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b" gracePeriod=2 Jan 23 18:42:44 crc kubenswrapper[4688]: I0123 18:42:44.916008 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.031160 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities\") pod \"362190be-dab9-4717-b199-39e109e48dd5\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.031244 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6dl8\" (UniqueName: \"kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8\") pod \"362190be-dab9-4717-b199-39e109e48dd5\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.031464 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content\") pod \"362190be-dab9-4717-b199-39e109e48dd5\" (UID: \"362190be-dab9-4717-b199-39e109e48dd5\") " Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.031948 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities" (OuterVolumeSpecName: "utilities") pod "362190be-dab9-4717-b199-39e109e48dd5" (UID: "362190be-dab9-4717-b199-39e109e48dd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.032374 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.041537 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8" (OuterVolumeSpecName: "kube-api-access-q6dl8") pod "362190be-dab9-4717-b199-39e109e48dd5" (UID: "362190be-dab9-4717-b199-39e109e48dd5"). InnerVolumeSpecName "kube-api-access-q6dl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.134829 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6dl8\" (UniqueName: \"kubernetes.io/projected/362190be-dab9-4717-b199-39e109e48dd5-kube-api-access-q6dl8\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.164291 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "362190be-dab9-4717-b199-39e109e48dd5" (UID: "362190be-dab9-4717-b199-39e109e48dd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.237531 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/362190be-dab9-4717-b199-39e109e48dd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.456842 4688 generic.go:334] "Generic (PLEG): container finished" podID="362190be-dab9-4717-b199-39e109e48dd5" containerID="1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b" exitCode=0 Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.456920 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvbdt" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.456934 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerDied","Data":"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b"} Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.456972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvbdt" event={"ID":"362190be-dab9-4717-b199-39e109e48dd5","Type":"ContainerDied","Data":"0cb9a154113e30e64e6e0a0fe11e82f76ef2d1ccd1242e348b825ed03d32bb20"} Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.456992 4688 scope.go:117] "RemoveContainer" containerID="1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.461062 4688 generic.go:334] "Generic (PLEG): container finished" podID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerID="a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02" exitCode=0 Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.461105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerDied","Data":"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02"} Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.487759 4688 scope.go:117] "RemoveContainer" containerID="3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.509431 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.511450 4688 scope.go:117] "RemoveContainer" containerID="5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.523517 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvbdt"] Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.553283 4688 scope.go:117] "RemoveContainer" containerID="1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b" Jan 23 18:42:45 crc kubenswrapper[4688]: E0123 18:42:45.553838 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b\": container with ID starting with 1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b not found: ID does not exist" containerID="1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b" Jan 23 18:42:45 crc 
kubenswrapper[4688]: I0123 18:42:45.553870 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b"} err="failed to get container status \"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b\": rpc error: code = NotFound desc = could not find container \"1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b\": container with ID starting with 1bc25c0d556fcbbe9e25ac8c482f97d2b14369e7ffaceb3ce566e152d06c038b not found: ID does not exist" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.553907 4688 scope.go:117] "RemoveContainer" containerID="3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f" Jan 23 18:42:45 crc kubenswrapper[4688]: E0123 18:42:45.554246 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f\": container with ID starting with 3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f not found: ID does not exist" containerID="3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.554278 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f"} err="failed to get container status \"3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f\": rpc error: code = NotFound desc = could not find container \"3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f\": container with ID starting with 3e55b5464a144432b87bd9a72dea51b47bbdcb1db5464657a11041efc1964e7f not found: ID does not exist" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.554298 4688 scope.go:117] "RemoveContainer" containerID="5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda" Jan 23 18:42:45 crc kubenswrapper[4688]: E0123 18:42:45.554552 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda\": container with ID starting with 5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda not found: ID does not exist" containerID="5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda" Jan 23 18:42:45 crc kubenswrapper[4688]: I0123 18:42:45.554582 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda"} err="failed to get container status \"5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda\": rpc error: code = NotFound desc = could not find container \"5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda\": container with ID starting with 5cc7fab047f4cd3d62fe83fe388959b634a30180a69457df00724b6a8c178bda not found: ID does not exist" Jan 23 18:42:46 crc kubenswrapper[4688]: I0123 18:42:46.473022 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerStarted","Data":"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff"} Jan 23 18:42:46 crc kubenswrapper[4688]: I0123 18:42:46.494556 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6r58p" podStartSLOduration=3.035070477 podStartE2EDuration="5.494531934s" podCreationTimestamp="2026-01-23 18:42:41 +0000 UTC" firstStartedPulling="2026-01-23 18:42:43.433470466 +0000 UTC m=+2158.429294907" lastFinishedPulling="2026-01-23 18:42:45.892931923 +0000 UTC m=+2160.888756364" observedRunningTime="2026-01-23 18:42:46.489113548 +0000 UTC m=+2161.484937989" watchObservedRunningTime="2026-01-23 18:42:46.494531934 +0000 UTC m=+2161.490356375" Jan 23 18:42:47 crc kubenswrapper[4688]: I0123 18:42:47.373735 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="362190be-dab9-4717-b199-39e109e48dd5" path="/var/lib/kubelet/pods/362190be-dab9-4717-b199-39e109e48dd5/volumes" Jan 23 18:42:51 crc kubenswrapper[4688]: I0123 18:42:51.698443 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:51 crc kubenswrapper[4688]: I0123 18:42:51.701400 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:51 crc kubenswrapper[4688]: I0123 18:42:51.752077 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:52 crc kubenswrapper[4688]: I0123 18:42:52.579281 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:52 crc kubenswrapper[4688]: I0123 18:42:52.642275 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6r58p"] Jan 23 18:42:54 crc kubenswrapper[4688]: I0123 18:42:54.547494 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6r58p" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="registry-server" containerID="cri-o://0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff" gracePeriod=2 Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.485463 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.560677 4688 generic.go:334] "Generic (PLEG): container finished" podID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerID="0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff" exitCode=0 Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.560758 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6r58p" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.560792 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerDied","Data":"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff"} Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.561478 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6r58p" event={"ID":"9ec7da5d-ed69-46d8-a674-c17f23e4196b","Type":"ContainerDied","Data":"4dcbf8e5c1ef41e504e62c2407cc9613ee5e2286c1b25b340d0b7fbbe23a9ecd"} Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.561539 4688 scope.go:117] "RemoveContainer" containerID="0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.570756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gw4r\" (UniqueName: \"kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r\") pod \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.571051 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities\") pod \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.571175 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content\") pod \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\" (UID: \"9ec7da5d-ed69-46d8-a674-c17f23e4196b\") " Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.572789 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities" (OuterVolumeSpecName: "utilities") pod "9ec7da5d-ed69-46d8-a674-c17f23e4196b" (UID: "9ec7da5d-ed69-46d8-a674-c17f23e4196b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.581513 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r" (OuterVolumeSpecName: "kube-api-access-8gw4r") pod "9ec7da5d-ed69-46d8-a674-c17f23e4196b" (UID: "9ec7da5d-ed69-46d8-a674-c17f23e4196b"). InnerVolumeSpecName "kube-api-access-8gw4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.596838 4688 scope.go:117] "RemoveContainer" containerID="a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.634062 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ec7da5d-ed69-46d8-a674-c17f23e4196b" (UID: "9ec7da5d-ed69-46d8-a674-c17f23e4196b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.645515 4688 scope.go:117] "RemoveContainer" containerID="aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.673530 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gw4r\" (UniqueName: \"kubernetes.io/projected/9ec7da5d-ed69-46d8-a674-c17f23e4196b-kube-api-access-8gw4r\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.673574 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.673586 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec7da5d-ed69-46d8-a674-c17f23e4196b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.699729 4688 scope.go:117] "RemoveContainer" containerID="0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff" Jan 23 18:42:55 crc kubenswrapper[4688]: E0123 18:42:55.700613 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff\": container with ID starting with 0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff not found: ID does not exist" containerID="0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.700651 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff"} err="failed to get container status \"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff\": rpc error: code = NotFound desc = could not find container \"0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff\": container with ID starting with 0dc44856de3a81d152754257a2684ac79fada7655c041efd7939570e9c9bf7ff not found: ID does not exist" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.700679 4688 scope.go:117] "RemoveContainer" containerID="a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02" Jan 23 18:42:55 crc kubenswrapper[4688]: E0123 18:42:55.701376 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02\": container with ID starting with a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02 not found: ID does not exist" containerID="a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02" Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.701397 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02"} err="failed to get container status \"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02\": rpc error: code = NotFound desc = could not find container \"a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02\": container with ID starting with a992d7349fbd8458b98fde586bf2797913f594df426cfd572f2d1f7ef6e79a02 not found: ID does not exist" Jan 23 18:42:55 crc 
Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.701412 4688 scope.go:117] "RemoveContainer" containerID="aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440"
Jan 23 18:42:55 crc kubenswrapper[4688]: E0123 18:42:55.701681 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440\": container with ID starting with aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440 not found: ID does not exist" containerID="aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440"
Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.701700 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440"} err="failed to get container status \"aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440\": rpc error: code = NotFound desc = could not find container \"aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440\": container with ID starting with aff68329598dc34e97c95f26e1de310a8c014fa24c299e75a1f98e3944d00440 not found: ID does not exist"
Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.896620 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6r58p"]
Jan 23 18:42:55 crc kubenswrapper[4688]: I0123 18:42:55.909352 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6r58p"]
Jan 23 18:42:56 crc kubenswrapper[4688]: I0123 18:42:56.570707 4688 generic.go:334] "Generic (PLEG): container finished" podID="45576589-fbbb-4556-9306-de4deba76388" containerID="78afd1be25a350bc1806ce8232fe94de25a1f3f21620863d3403af8a36039d11" exitCode=0
Jan 23 18:42:56 crc kubenswrapper[4688]: I0123 18:42:56.570774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" event={"ID":"45576589-fbbb-4556-9306-de4deba76388","Type":"ContainerDied","Data":"78afd1be25a350bc1806ce8232fe94de25a1f3f21620863d3403af8a36039d11"}
Jan 23 18:42:57 crc kubenswrapper[4688]: I0123 18:42:57.375816 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" path="/var/lib/kubelet/pods/9ec7da5d-ed69-46d8-a674-c17f23e4196b/volumes"
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.016072 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8"
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.120523 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam\") pod \"45576589-fbbb-4556-9306-de4deba76388\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") "
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.120636 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w27zc\" (UniqueName: \"kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc\") pod \"45576589-fbbb-4556-9306-de4deba76388\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") "
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.120839 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory\") pod \"45576589-fbbb-4556-9306-de4deba76388\" (UID: \"45576589-fbbb-4556-9306-de4deba76388\") "
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.127849 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc" (OuterVolumeSpecName: "kube-api-access-w27zc") pod "45576589-fbbb-4556-9306-de4deba76388" (UID: "45576589-fbbb-4556-9306-de4deba76388"). InnerVolumeSpecName "kube-api-access-w27zc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.152148 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45576589-fbbb-4556-9306-de4deba76388" (UID: "45576589-fbbb-4556-9306-de4deba76388"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.152534 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory" (OuterVolumeSpecName: "inventory") pod "45576589-fbbb-4556-9306-de4deba76388" (UID: "45576589-fbbb-4556-9306-de4deba76388"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.223257 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.223296 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w27zc\" (UniqueName: \"kubernetes.io/projected/45576589-fbbb-4556-9306-de4deba76388-kube-api-access-w27zc\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.223306 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45576589-fbbb-4556-9306-de4deba76388-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.592385 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" event={"ID":"45576589-fbbb-4556-9306-de4deba76388","Type":"ContainerDied","Data":"523718322e868a4515e9e2b4cab0a30689ea48e053e1470678cfe6de0a98fa23"} Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.592429 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.592439 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="523718322e868a4515e9e2b4cab0a30689ea48e053e1470678cfe6de0a98fa23" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.687604 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5kb66"] Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688165 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="extract-utilities" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688213 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="extract-utilities" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688238 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45576589-fbbb-4556-9306-de4deba76388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688249 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45576589-fbbb-4556-9306-de4deba76388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688268 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="extract-content" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688275 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="extract-content" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688292 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="extract-utilities" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688298 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="extract-utilities" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688311 4688 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="extract-content" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688331 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="extract-content" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688399 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688409 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: E0123 18:42:58.688419 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688427 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688620 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45576589-fbbb-4556-9306-de4deba76388" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688632 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="362190be-dab9-4717-b199-39e109e48dd5" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.688655 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec7da5d-ed69-46d8-a674-c17f23e4196b" containerName="registry-server" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.689414 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.691891 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.692583 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.693810 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.695325 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.697308 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5kb66"] Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.837686 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.838094 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxz4\" (UniqueName: \"kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.838128 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.940263 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxz4\" (UniqueName: \"kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.940335 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.940454 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc 
kubenswrapper[4688]: I0123 18:42:58.944763 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.951700 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:58 crc kubenswrapper[4688]: I0123 18:42:58.959076 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxz4\" (UniqueName: \"kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4\") pod \"ssh-known-hosts-edpm-deployment-5kb66\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:59 crc kubenswrapper[4688]: I0123 18:42:59.010148 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:42:59 crc kubenswrapper[4688]: I0123 18:42:59.519885 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5kb66"] Jan 23 18:42:59 crc kubenswrapper[4688]: I0123 18:42:59.603858 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" event={"ID":"45add2ba-c382-4807-8995-43514182b85a","Type":"ContainerStarted","Data":"838ca6e3e28bdf027b54f0f59395261aee5a5a74ce694ebcd43d90549f0f0164"} Jan 23 18:43:00 crc kubenswrapper[4688]: I0123 18:43:00.613899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" event={"ID":"45add2ba-c382-4807-8995-43514182b85a","Type":"ContainerStarted","Data":"f4b63b99aca13f5f676aa8232a122f23386ee100852aeb07863360aa10db4b0a"} Jan 23 18:43:00 crc kubenswrapper[4688]: I0123 18:43:00.637836 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" podStartSLOduration=2.087703869 podStartE2EDuration="2.637817612s" podCreationTimestamp="2026-01-23 18:42:58 +0000 UTC" firstStartedPulling="2026-01-23 18:42:59.525967393 +0000 UTC m=+2174.521791834" lastFinishedPulling="2026-01-23 18:43:00.076081146 +0000 UTC m=+2175.071905577" observedRunningTime="2026-01-23 18:43:00.629956167 +0000 UTC m=+2175.625780608" watchObservedRunningTime="2026-01-23 18:43:00.637817612 +0000 UTC m=+2175.633642053" Jan 23 18:43:06 crc kubenswrapper[4688]: I0123 18:43:06.965558 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:43:06 crc kubenswrapper[4688]: I0123 18:43:06.966206 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:43:08 crc kubenswrapper[4688]: I0123 18:43:08.694892 4688 generic.go:334] "Generic (PLEG): container finished" podID="45add2ba-c382-4807-8995-43514182b85a" containerID="f4b63b99aca13f5f676aa8232a122f23386ee100852aeb07863360aa10db4b0a" exitCode=0 Jan 23 18:43:08 crc kubenswrapper[4688]: I0123 18:43:08.695746 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" event={"ID":"45add2ba-c382-4807-8995-43514182b85a","Type":"ContainerDied","Data":"f4b63b99aca13f5f676aa8232a122f23386ee100852aeb07863360aa10db4b0a"} Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.094463 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.272520 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam\") pod \"45add2ba-c382-4807-8995-43514182b85a\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.272570 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0\") pod \"45add2ba-c382-4807-8995-43514182b85a\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.272642 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsxz4\" (UniqueName: \"kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4\") pod \"45add2ba-c382-4807-8995-43514182b85a\" (UID: \"45add2ba-c382-4807-8995-43514182b85a\") " Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.280623 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4" (OuterVolumeSpecName: "kube-api-access-dsxz4") pod "45add2ba-c382-4807-8995-43514182b85a" (UID: "45add2ba-c382-4807-8995-43514182b85a"). InnerVolumeSpecName "kube-api-access-dsxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.306233 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "45add2ba-c382-4807-8995-43514182b85a" (UID: "45add2ba-c382-4807-8995-43514182b85a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.307776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45add2ba-c382-4807-8995-43514182b85a" (UID: "45add2ba-c382-4807-8995-43514182b85a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.375563 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.375601 4688 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/45add2ba-c382-4807-8995-43514182b85a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.375631 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsxz4\" (UniqueName: \"kubernetes.io/projected/45add2ba-c382-4807-8995-43514182b85a-kube-api-access-dsxz4\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.718015 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" event={"ID":"45add2ba-c382-4807-8995-43514182b85a","Type":"ContainerDied","Data":"838ca6e3e28bdf027b54f0f59395261aee5a5a74ce694ebcd43d90549f0f0164"} Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.718058 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="838ca6e3e28bdf027b54f0f59395261aee5a5a74ce694ebcd43d90549f0f0164" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.718075 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5kb66" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.802993 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm"] Jan 23 18:43:10 crc kubenswrapper[4688]: E0123 18:43:10.803580 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45add2ba-c382-4807-8995-43514182b85a" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.803600 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="45add2ba-c382-4807-8995-43514182b85a" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.803793 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="45add2ba-c382-4807-8995-43514182b85a" containerName="ssh-known-hosts-edpm-deployment" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.804554 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.806839 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.806894 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.807228 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.822963 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.825642 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm"] Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.989222 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfdq7\" (UniqueName: \"kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.989528 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:10 crc kubenswrapper[4688]: I0123 18:43:10.989694 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.090552 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.090614 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfdq7\" (UniqueName: \"kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.090787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.095809 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.096690 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.107980 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfdq7\" (UniqueName: \"kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-f6tdm\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.122238 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.672023 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm"] Jan 23 18:43:11 crc kubenswrapper[4688]: I0123 18:43:11.729368 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" event={"ID":"90a8ac5e-520d-44bd-a129-ce6b0c0f2786","Type":"ContainerStarted","Data":"06d581e2060da033c966fce352132ca4328d5bc9445e35f1d6d1e7236e1ab26a"} Jan 23 18:43:12 crc kubenswrapper[4688]: I0123 18:43:12.739628 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" event={"ID":"90a8ac5e-520d-44bd-a129-ce6b0c0f2786","Type":"ContainerStarted","Data":"e7f155daae36648df4b0de473c6eef103544a9fa97442ce2419ba70e812304ed"} Jan 23 18:43:12 crc kubenswrapper[4688]: I0123 18:43:12.765075 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" podStartSLOduration=2.311494924 podStartE2EDuration="2.765049564s" podCreationTimestamp="2026-01-23 18:43:10 +0000 UTC" firstStartedPulling="2026-01-23 18:43:11.67599391 +0000 UTC m=+2186.671818341" lastFinishedPulling="2026-01-23 18:43:12.12954855 +0000 UTC m=+2187.125372981" observedRunningTime="2026-01-23 18:43:12.757729864 +0000 UTC m=+2187.753554305" watchObservedRunningTime="2026-01-23 18:43:12.765049564 +0000 UTC m=+2187.760874005" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.450567 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.455729 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.459725 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.459814 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.460154 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvr8t\" (UniqueName: \"kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.461532 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.562819 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvr8t\" (UniqueName: \"kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.563785 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.564708 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.564887 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.565309 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.590586 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qvr8t\" (UniqueName: \"kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t\") pod \"community-operators-zzvjx\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:14 crc kubenswrapper[4688]: I0123 18:43:14.776745 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:15 crc kubenswrapper[4688]: I0123 18:43:15.378372 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:15 crc kubenswrapper[4688]: I0123 18:43:15.767646 4688 generic.go:334] "Generic (PLEG): container finished" podID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerID="12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b" exitCode=0 Jan 23 18:43:15 crc kubenswrapper[4688]: I0123 18:43:15.767859 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerDied","Data":"12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b"} Jan 23 18:43:15 crc kubenswrapper[4688]: I0123 18:43:15.767944 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerStarted","Data":"11b43633b855d5048c2c0592d57ed840db855405a230694c73d3751068f403b0"} Jan 23 18:43:16 crc kubenswrapper[4688]: I0123 18:43:16.777980 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerStarted","Data":"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7"} Jan 23 18:43:17 crc kubenswrapper[4688]: I0123 18:43:17.795570 4688 generic.go:334] "Generic (PLEG): container finished" podID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerID="95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7" exitCode=0 Jan 23 18:43:17 crc kubenswrapper[4688]: I0123 18:43:17.795629 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerDied","Data":"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7"} Jan 23 18:43:18 crc kubenswrapper[4688]: I0123 18:43:18.806321 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerStarted","Data":"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d"} Jan 23 18:43:18 crc kubenswrapper[4688]: I0123 18:43:18.844480 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzvjx" podStartSLOduration=2.267634258 podStartE2EDuration="4.844448864s" podCreationTimestamp="2026-01-23 18:43:14 +0000 UTC" firstStartedPulling="2026-01-23 18:43:15.769619151 +0000 UTC m=+2190.765443592" lastFinishedPulling="2026-01-23 18:43:18.346433757 +0000 UTC m=+2193.342258198" observedRunningTime="2026-01-23 18:43:18.82723615 +0000 UTC m=+2193.823060631" watchObservedRunningTime="2026-01-23 18:43:18.844448864 +0000 UTC m=+2193.840273335" Jan 23 18:43:21 crc kubenswrapper[4688]: I0123 18:43:21.834752 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="90a8ac5e-520d-44bd-a129-ce6b0c0f2786" containerID="e7f155daae36648df4b0de473c6eef103544a9fa97442ce2419ba70e812304ed" exitCode=0 Jan 23 18:43:21 crc kubenswrapper[4688]: I0123 18:43:21.834881 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" event={"ID":"90a8ac5e-520d-44bd-a129-ce6b0c0f2786","Type":"ContainerDied","Data":"e7f155daae36648df4b0de473c6eef103544a9fa97442ce2419ba70e812304ed"} Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.458111 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.564990 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory\") pod \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.565225 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam\") pod \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.565424 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfdq7\" (UniqueName: \"kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7\") pod \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\" (UID: \"90a8ac5e-520d-44bd-a129-ce6b0c0f2786\") " Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.574039 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7" (OuterVolumeSpecName: "kube-api-access-kfdq7") pod "90a8ac5e-520d-44bd-a129-ce6b0c0f2786" (UID: "90a8ac5e-520d-44bd-a129-ce6b0c0f2786"). InnerVolumeSpecName "kube-api-access-kfdq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.600464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90a8ac5e-520d-44bd-a129-ce6b0c0f2786" (UID: "90a8ac5e-520d-44bd-a129-ce6b0c0f2786"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.609692 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory" (OuterVolumeSpecName: "inventory") pod "90a8ac5e-520d-44bd-a129-ce6b0c0f2786" (UID: "90a8ac5e-520d-44bd-a129-ce6b0c0f2786"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.667245 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfdq7\" (UniqueName: \"kubernetes.io/projected/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-kube-api-access-kfdq7\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.667280 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.667292 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90a8ac5e-520d-44bd-a129-ce6b0c0f2786-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.777133 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.777259 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.848699 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.863921 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.863923 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-f6tdm" event={"ID":"90a8ac5e-520d-44bd-a129-ce6b0c0f2786","Type":"ContainerDied","Data":"06d581e2060da033c966fce352132ca4328d5bc9445e35f1d6d1e7236e1ab26a"} Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.863979 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d581e2060da033c966fce352132ca4328d5bc9445e35f1d6d1e7236e1ab26a" Jan 23 18:43:24 crc kubenswrapper[4688]: I0123 18:43:24.919443 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.091679 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.563437 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd"] Jan 23 18:43:25 crc kubenswrapper[4688]: E0123 18:43:25.565736 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a8ac5e-520d-44bd-a129-ce6b0c0f2786" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.565775 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a8ac5e-520d-44bd-a129-ce6b0c0f2786" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.566171 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a8ac5e-520d-44bd-a129-ce6b0c0f2786" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.567093 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.570015 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.570370 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.570765 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.570968 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.592289 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd"] Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.597518 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gdg\" (UniqueName: \"kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.597922 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.598056 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.699060 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gdg\" (UniqueName: \"kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.699313 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.699370 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.705402 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.706398 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.719857 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gdg\" (UniqueName: \"kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:25 crc kubenswrapper[4688]: I0123 18:43:25.899118 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:26 crc kubenswrapper[4688]: I0123 18:43:26.467983 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd"] Jan 23 18:43:26 crc kubenswrapper[4688]: I0123 18:43:26.881105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" event={"ID":"7506d9ea-fa02-4f06-b654-bb7857357a6f","Type":"ContainerStarted","Data":"a866e36590d57fdc0ff7b7ea90d821c127c975827e07716bfc9424f88a62b41a"} Jan 23 18:43:26 crc kubenswrapper[4688]: I0123 18:43:26.881364 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zzvjx" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="registry-server" containerID="cri-o://47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d" gracePeriod=2 Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.349253 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.538473 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content\") pod \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.538949 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities\") pod \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.539114 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvr8t\" (UniqueName: \"kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t\") pod \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\" (UID: \"9d63d4aa-d378-436c-bfa4-b7906c84b6cf\") " Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.539821 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities" (OuterVolumeSpecName: "utilities") pod "9d63d4aa-d378-436c-bfa4-b7906c84b6cf" (UID: "9d63d4aa-d378-436c-bfa4-b7906c84b6cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.546011 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t" (OuterVolumeSpecName: "kube-api-access-qvr8t") pod "9d63d4aa-d378-436c-bfa4-b7906c84b6cf" (UID: "9d63d4aa-d378-436c-bfa4-b7906c84b6cf"). InnerVolumeSpecName "kube-api-access-qvr8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.641794 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.641845 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvr8t\" (UniqueName: \"kubernetes.io/projected/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-kube-api-access-qvr8t\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.895808 4688 generic.go:334] "Generic (PLEG): container finished" podID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerID="47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d" exitCode=0 Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.895937 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerDied","Data":"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d"} Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.895992 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzvjx" event={"ID":"9d63d4aa-d378-436c-bfa4-b7906c84b6cf","Type":"ContainerDied","Data":"11b43633b855d5048c2c0592d57ed840db855405a230694c73d3751068f403b0"} Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.896066 4688 scope.go:117] "RemoveContainer" containerID="47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.896359 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzvjx" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.902492 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" event={"ID":"7506d9ea-fa02-4f06-b654-bb7857357a6f","Type":"ContainerStarted","Data":"72e53a3e9c4d588202674336ba16b8351e41f9e54c32d5cdf6b82f72ea6c2789"} Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.921123 4688 scope.go:117] "RemoveContainer" containerID="95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.937219 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" podStartSLOduration=2.476051504 podStartE2EDuration="2.937199022s" podCreationTimestamp="2026-01-23 18:43:25 +0000 UTC" firstStartedPulling="2026-01-23 18:43:26.475742856 +0000 UTC m=+2201.471567297" lastFinishedPulling="2026-01-23 18:43:26.936890374 +0000 UTC m=+2201.932714815" observedRunningTime="2026-01-23 18:43:27.934786152 +0000 UTC m=+2202.930610593" watchObservedRunningTime="2026-01-23 18:43:27.937199022 +0000 UTC m=+2202.933023463" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.943134 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d63d4aa-d378-436c-bfa4-b7906c84b6cf" (UID: "9d63d4aa-d378-436c-bfa4-b7906c84b6cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.949169 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d63d4aa-d378-436c-bfa4-b7906c84b6cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.954061 4688 scope.go:117] "RemoveContainer" containerID="12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.975635 4688 scope.go:117] "RemoveContainer" containerID="47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d" Jan 23 18:43:27 crc kubenswrapper[4688]: E0123 18:43:27.976125 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d\": container with ID starting with 47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d not found: ID does not exist" containerID="47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.976168 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d"} err="failed to get container status \"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d\": rpc error: code = NotFound desc = could not find container \"47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d\": container with ID starting with 47c8e6ce9a3e051613f3c5d71a3e418c212d5733249f21a34cf20f84d67b007d not found: ID does not exist" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.976523 4688 scope.go:117] "RemoveContainer" containerID="95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7" Jan 23 18:43:27 crc kubenswrapper[4688]: E0123 18:43:27.976899 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7\": container with ID starting with 95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7 not found: ID does not exist" containerID="95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.976938 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7"} err="failed to get container status \"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7\": rpc error: code = NotFound desc = could not find container \"95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7\": container with ID starting with 95746c00bc600769686fadb3ec7c0043327bc157d51005d0803516d90b317ea7 not found: ID does not exist" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.976962 4688 scope.go:117] "RemoveContainer" containerID="12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b" Jan 23 18:43:27 crc kubenswrapper[4688]: E0123 18:43:27.977253 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b\": container with ID starting with 12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b not found: ID does not exist" 
containerID="12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b" Jan 23 18:43:27 crc kubenswrapper[4688]: I0123 18:43:27.977308 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b"} err="failed to get container status \"12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b\": rpc error: code = NotFound desc = could not find container \"12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b\": container with ID starting with 12c746924deaad5273ba2a480e8c2eb3121fb7ba7c2961bc1f74588bd68f936b not found: ID does not exist" Jan 23 18:43:28 crc kubenswrapper[4688]: I0123 18:43:28.237572 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:28 crc kubenswrapper[4688]: I0123 18:43:28.245374 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zzvjx"] Jan 23 18:43:29 crc kubenswrapper[4688]: I0123 18:43:29.370417 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" path="/var/lib/kubelet/pods/9d63d4aa-d378-436c-bfa4-b7906c84b6cf/volumes" Jan 23 18:43:36 crc kubenswrapper[4688]: I0123 18:43:36.964991 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:43:36 crc kubenswrapper[4688]: I0123 18:43:36.965540 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:43:36 crc kubenswrapper[4688]: I0123 18:43:36.965587 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:43:36 crc kubenswrapper[4688]: I0123 18:43:36.966543 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:43:36 crc kubenswrapper[4688]: I0123 18:43:36.966607 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1" gracePeriod=600 Jan 23 18:43:37 crc kubenswrapper[4688]: I0123 18:43:37.997277 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1" exitCode=0 Jan 23 18:43:37 crc kubenswrapper[4688]: I0123 18:43:37.997358 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1"} Jan 23 18:43:37 crc kubenswrapper[4688]: I0123 18:43:37.997906 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9"} Jan 23 18:43:37 crc kubenswrapper[4688]: I0123 18:43:37.997933 4688 scope.go:117] "RemoveContainer" containerID="21d6d44c522ce19d68fc2f9a6c1c40de16c3f0611e7b59f5ea8288c6dcb98c58" Jan 23 18:43:38 crc kubenswrapper[4688]: I0123 18:43:38.000019 4688 generic.go:334] "Generic (PLEG): container finished" podID="7506d9ea-fa02-4f06-b654-bb7857357a6f" containerID="72e53a3e9c4d588202674336ba16b8351e41f9e54c32d5cdf6b82f72ea6c2789" exitCode=0 Jan 23 18:43:38 crc kubenswrapper[4688]: I0123 18:43:38.000053 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" event={"ID":"7506d9ea-fa02-4f06-b654-bb7857357a6f","Type":"ContainerDied","Data":"72e53a3e9c4d588202674336ba16b8351e41f9e54c32d5cdf6b82f72ea6c2789"} Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.446827 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.624214 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam\") pod \"7506d9ea-fa02-4f06-b654-bb7857357a6f\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.624353 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory\") pod \"7506d9ea-fa02-4f06-b654-bb7857357a6f\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.624663 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gdg\" (UniqueName: \"kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg\") pod \"7506d9ea-fa02-4f06-b654-bb7857357a6f\" (UID: \"7506d9ea-fa02-4f06-b654-bb7857357a6f\") " Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.631610 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg" (OuterVolumeSpecName: "kube-api-access-q9gdg") pod "7506d9ea-fa02-4f06-b654-bb7857357a6f" (UID: "7506d9ea-fa02-4f06-b654-bb7857357a6f"). InnerVolumeSpecName "kube-api-access-q9gdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.669861 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory" (OuterVolumeSpecName: "inventory") pod "7506d9ea-fa02-4f06-b654-bb7857357a6f" (UID: "7506d9ea-fa02-4f06-b654-bb7857357a6f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.673795 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7506d9ea-fa02-4f06-b654-bb7857357a6f" (UID: "7506d9ea-fa02-4f06-b654-bb7857357a6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.728508 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gdg\" (UniqueName: \"kubernetes.io/projected/7506d9ea-fa02-4f06-b654-bb7857357a6f-kube-api-access-q9gdg\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.728582 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:39 crc kubenswrapper[4688]: I0123 18:43:39.728599 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7506d9ea-fa02-4f06-b654-bb7857357a6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.027835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" event={"ID":"7506d9ea-fa02-4f06-b654-bb7857357a6f","Type":"ContainerDied","Data":"a866e36590d57fdc0ff7b7ea90d821c127c975827e07716bfc9424f88a62b41a"} Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.027892 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a866e36590d57fdc0ff7b7ea90d821c127c975827e07716bfc9424f88a62b41a" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.027913 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.124392 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5"] Jan 23 18:43:40 crc kubenswrapper[4688]: E0123 18:43:40.125494 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="extract-utilities" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.125605 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="extract-utilities" Jan 23 18:43:40 crc kubenswrapper[4688]: E0123 18:43:40.125702 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="extract-content" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.125777 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="extract-content" Jan 23 18:43:40 crc kubenswrapper[4688]: E0123 18:43:40.125891 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="registry-server" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.125963 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="registry-server" Jan 23 18:43:40 crc kubenswrapper[4688]: E0123 18:43:40.126039 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7506d9ea-fa02-4f06-b654-bb7857357a6f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.126104 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="7506d9ea-fa02-4f06-b654-bb7857357a6f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.126426 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d63d4aa-d378-436c-bfa4-b7906c84b6cf" containerName="registry-server" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.126517 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="7506d9ea-fa02-4f06-b654-bb7857357a6f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.128212 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.131608 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.131741 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.132015 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.132171 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.132402 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.132178 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.132850 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.134967 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137176 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137351 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137417 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137480 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137505 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137562 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137683 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137838 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.137929 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.138009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.138120 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz4lw\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.138227 4688 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.138317 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.138436 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.145554 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5"] Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240359 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240506 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240557 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240625 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240663 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240724 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240808 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240902 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240941 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.240983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.241017 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz4lw\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.241056 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.241104 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.241213 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.246400 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.246512 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.247343 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.247525 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.249133 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: 
\"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.249790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.250221 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.250634 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.252104 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.252426 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.252443 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.253882 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.254867 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.265939 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz4lw\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:40 crc kubenswrapper[4688]: I0123 18:43:40.446051 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:43:41 crc kubenswrapper[4688]: I0123 18:43:41.003593 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5"] Jan 23 18:43:41 crc kubenswrapper[4688]: W0123 18:43:41.010653 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cb10503_bf60_4049_a2b0_7299899692b0.slice/crio-955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f WatchSource:0}: Error finding container 955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f: Status 404 returned error can't find the container with id 955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f Jan 23 18:43:41 crc kubenswrapper[4688]: I0123 18:43:41.041402 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" event={"ID":"2cb10503-bf60-4049-a2b0-7299899692b0","Type":"ContainerStarted","Data":"955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f"} Jan 23 18:43:42 crc kubenswrapper[4688]: I0123 18:43:42.053741 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" event={"ID":"2cb10503-bf60-4049-a2b0-7299899692b0","Type":"ContainerStarted","Data":"d58be97363e438e6b8bf459523faf2393e254fa84a3d2ade669025444ca70a99"} Jan 23 18:44:24 crc kubenswrapper[4688]: I0123 18:44:24.503644 4688 generic.go:334] "Generic (PLEG): container finished" podID="2cb10503-bf60-4049-a2b0-7299899692b0" containerID="d58be97363e438e6b8bf459523faf2393e254fa84a3d2ade669025444ca70a99" exitCode=0 Jan 23 18:44:24 crc kubenswrapper[4688]: I0123 18:44:24.503748 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" event={"ID":"2cb10503-bf60-4049-a2b0-7299899692b0","Type":"ContainerDied","Data":"d58be97363e438e6b8bf459523faf2393e254fa84a3d2ade669025444ca70a99"} Jan 23 18:44:25 crc kubenswrapper[4688]: I0123 18:44:25.891881 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.055825 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.055951 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056061 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056140 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056327 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056389 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056492 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056652 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056730 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle\") pod 
\"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056820 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056866 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz4lw\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056933 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.056983 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2cb10503-bf60-4049-a2b0-7299899692b0\" (UID: \"2cb10503-bf60-4049-a2b0-7299899692b0\") " Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.064348 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.064797 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.065626 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.066776 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.066854 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.067007 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.068926 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.068930 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.068995 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.069353 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.074129 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw" (OuterVolumeSpecName: "kube-api-access-hz4lw") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "kube-api-access-hz4lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.076091 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.102059 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory" (OuterVolumeSpecName: "inventory") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.113045 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cb10503-bf60-4049-a2b0-7299899692b0" (UID: "2cb10503-bf60-4049-a2b0-7299899692b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160732 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160775 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160791 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160803 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160815 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160824 4688 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160833 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160842 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz4lw\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-kube-api-access-hz4lw\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160854 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160863 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160873 4688 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160882 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160891 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cb10503-bf60-4049-a2b0-7299899692b0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.160901 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2cb10503-bf60-4049-a2b0-7299899692b0-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.524329 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" event={"ID":"2cb10503-bf60-4049-a2b0-7299899692b0","Type":"ContainerDied","Data":"955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f"} Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.524370 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955a18c0b0f3828e24081fe29375afbe73574307bb0db187db3635b8520d776f" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.524395 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.615715 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf"] Jan 23 18:44:26 crc kubenswrapper[4688]: E0123 18:44:26.616116 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb10503-bf60-4049-a2b0-7299899692b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.616133 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb10503-bf60-4049-a2b0-7299899692b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.616324 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb10503-bf60-4049-a2b0-7299899692b0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.617042 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.621305 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.621400 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.623094 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.623326 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.623536 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.633379 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf"] Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.772877 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.772993 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.773015 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.773036 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9xj\" (UniqueName: \"kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.773115 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.875097 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.875249 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.875323 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.875349 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.875371 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9xj\" (UniqueName: \"kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.876381 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.878722 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.879805 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.880119 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.898003 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9xj\" (UniqueName: \"kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-288sf\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:26 crc kubenswrapper[4688]: I0123 18:44:26.950542 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:44:27 crc kubenswrapper[4688]: I0123 18:44:27.523800 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf"] Jan 23 18:44:27 crc kubenswrapper[4688]: I0123 18:44:27.536175 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" event={"ID":"2622f843-d555-43e1-b359-b490aab07eb2","Type":"ContainerStarted","Data":"ceb2e03082e38bbf3a93b91a716d6cd25e9a655f6bdb273bae5203d268f4258b"} Jan 23 18:44:28 crc kubenswrapper[4688]: I0123 18:44:28.545996 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" event={"ID":"2622f843-d555-43e1-b359-b490aab07eb2","Type":"ContainerStarted","Data":"04bfdf569e14285aec476bf8d448a7bc0b39d130a61ceaa52e0aeff8c0f465f5"} Jan 23 18:44:28 crc kubenswrapper[4688]: I0123 18:44:28.576157 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" podStartSLOduration=2.083777268 podStartE2EDuration="2.576132272s" podCreationTimestamp="2026-01-23 18:44:26 +0000 UTC" firstStartedPulling="2026-01-23 18:44:27.526220151 +0000 UTC m=+2262.522044592" lastFinishedPulling="2026-01-23 18:44:28.018575165 +0000 UTC m=+2263.014399596" observedRunningTime="2026-01-23 18:44:28.562742518 +0000 UTC m=+2263.558566969" watchObservedRunningTime="2026-01-23 18:44:28.576132272 +0000 UTC m=+2263.571956723" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.146770 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279"] Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.149325 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.152699 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.152792 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.166992 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279"] Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.218240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.218448 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.218785 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjg9c\" (UniqueName: \"kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.320962 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjg9c\" (UniqueName: \"kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.321048 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.321119 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.322092 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume\") pod 
\"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.332243 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.347727 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjg9c\" (UniqueName: \"kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c\") pod \"collect-profiles-29486565-sf279\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:00 crc kubenswrapper[4688]: I0123 18:45:00.617867 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:01 crc kubenswrapper[4688]: I0123 18:45:01.117923 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279"] Jan 23 18:45:01 crc kubenswrapper[4688]: W0123 18:45:01.134409 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdfe9b0f_a662_4411_89cb_a14697aceaab.slice/crio-83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6 WatchSource:0}: Error finding container 83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6: Status 404 returned error can't find the container with id 83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6 Jan 23 18:45:01 crc kubenswrapper[4688]: I0123 18:45:01.976934 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" event={"ID":"fdfe9b0f-a662-4411-89cb-a14697aceaab","Type":"ContainerStarted","Data":"bbba60f964453f0ff8906e9f1cae116efb16ed5965d6d3e9c3b139d48d22a113"} Jan 23 18:45:01 crc kubenswrapper[4688]: I0123 18:45:01.976990 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" event={"ID":"fdfe9b0f-a662-4411-89cb-a14697aceaab","Type":"ContainerStarted","Data":"83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6"} Jan 23 18:45:02 crc kubenswrapper[4688]: I0123 18:45:02.990241 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" event={"ID":"fdfe9b0f-a662-4411-89cb-a14697aceaab","Type":"ContainerDied","Data":"bbba60f964453f0ff8906e9f1cae116efb16ed5965d6d3e9c3b139d48d22a113"} Jan 23 18:45:02 crc kubenswrapper[4688]: I0123 18:45:02.990203 4688 generic.go:334] "Generic (PLEG): container finished" podID="fdfe9b0f-a662-4411-89cb-a14697aceaab" containerID="bbba60f964453f0ff8906e9f1cae116efb16ed5965d6d3e9c3b139d48d22a113" exitCode=0 Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.366700 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.498592 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjg9c\" (UniqueName: \"kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c\") pod \"fdfe9b0f-a662-4411-89cb-a14697aceaab\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.498676 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume\") pod \"fdfe9b0f-a662-4411-89cb-a14697aceaab\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.498745 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume\") pod \"fdfe9b0f-a662-4411-89cb-a14697aceaab\" (UID: \"fdfe9b0f-a662-4411-89cb-a14697aceaab\") " Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.499466 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume" (OuterVolumeSpecName: "config-volume") pod "fdfe9b0f-a662-4411-89cb-a14697aceaab" (UID: "fdfe9b0f-a662-4411-89cb-a14697aceaab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.499864 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdfe9b0f-a662-4411-89cb-a14697aceaab-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.505831 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c" (OuterVolumeSpecName: "kube-api-access-kjg9c") pod "fdfe9b0f-a662-4411-89cb-a14697aceaab" (UID: "fdfe9b0f-a662-4411-89cb-a14697aceaab"). InnerVolumeSpecName "kube-api-access-kjg9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.506585 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fdfe9b0f-a662-4411-89cb-a14697aceaab" (UID: "fdfe9b0f-a662-4411-89cb-a14697aceaab"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.601564 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjg9c\" (UniqueName: \"kubernetes.io/projected/fdfe9b0f-a662-4411-89cb-a14697aceaab-kube-api-access-kjg9c\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:04 crc kubenswrapper[4688]: I0123 18:45:04.601633 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdfe9b0f-a662-4411-89cb-a14697aceaab-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.011321 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" event={"ID":"fdfe9b0f-a662-4411-89cb-a14697aceaab","Type":"ContainerDied","Data":"83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6"} Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.011666 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83a53a4e3dc9931ecc25adf46251d01a25e7401eb74794540442b4075f4301e6" Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.011477 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279" Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.084605 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"] Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.092983 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486520-f4gd5"] Jan 23 18:45:05 crc kubenswrapper[4688]: I0123 18:45:05.371060 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9161065b-30e0-4eea-b615-829617fe9b26" path="/var/lib/kubelet/pods/9161065b-30e0-4eea-b615-829617fe9b26/volumes" Jan 23 18:45:44 crc kubenswrapper[4688]: I0123 18:45:44.467610 4688 generic.go:334] "Generic (PLEG): container finished" podID="2622f843-d555-43e1-b359-b490aab07eb2" containerID="04bfdf569e14285aec476bf8d448a7bc0b39d130a61ceaa52e0aeff8c0f465f5" exitCode=0 Jan 23 18:45:44 crc kubenswrapper[4688]: I0123 18:45:44.467699 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" event={"ID":"2622f843-d555-43e1-b359-b490aab07eb2","Type":"ContainerDied","Data":"04bfdf569e14285aec476bf8d448a7bc0b39d130a61ceaa52e0aeff8c0f465f5"} Jan 23 18:45:45 crc kubenswrapper[4688]: I0123 18:45:45.913730 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.019759 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory\") pod \"2622f843-d555-43e1-b359-b490aab07eb2\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.019938 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv9xj\" (UniqueName: \"kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj\") pod \"2622f843-d555-43e1-b359-b490aab07eb2\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.020042 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0\") pod \"2622f843-d555-43e1-b359-b490aab07eb2\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.020148 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam\") pod \"2622f843-d555-43e1-b359-b490aab07eb2\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.020230 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle\") pod \"2622f843-d555-43e1-b359-b490aab07eb2\" (UID: \"2622f843-d555-43e1-b359-b490aab07eb2\") " Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.026428 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2622f843-d555-43e1-b359-b490aab07eb2" (UID: "2622f843-d555-43e1-b359-b490aab07eb2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.027379 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj" (OuterVolumeSpecName: "kube-api-access-bv9xj") pod "2622f843-d555-43e1-b359-b490aab07eb2" (UID: "2622f843-d555-43e1-b359-b490aab07eb2"). InnerVolumeSpecName "kube-api-access-bv9xj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.059807 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory" (OuterVolumeSpecName: "inventory") pod "2622f843-d555-43e1-b359-b490aab07eb2" (UID: "2622f843-d555-43e1-b359-b490aab07eb2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.067218 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2622f843-d555-43e1-b359-b490aab07eb2" (UID: "2622f843-d555-43e1-b359-b490aab07eb2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.078325 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "2622f843-d555-43e1-b359-b490aab07eb2" (UID: "2622f843-d555-43e1-b359-b490aab07eb2"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.123152 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.123226 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv9xj\" (UniqueName: \"kubernetes.io/projected/2622f843-d555-43e1-b359-b490aab07eb2-kube-api-access-bv9xj\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.123246 4688 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/2622f843-d555-43e1-b359-b490aab07eb2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.123259 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.123272 4688 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2622f843-d555-43e1-b359-b490aab07eb2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.492960 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" event={"ID":"2622f843-d555-43e1-b359-b490aab07eb2","Type":"ContainerDied","Data":"ceb2e03082e38bbf3a93b91a716d6cd25e9a655f6bdb273bae5203d268f4258b"} Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.493046 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb2e03082e38bbf3a93b91a716d6cd25e9a655f6bdb273bae5203d268f4258b" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.493061 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-288sf" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.605007 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj"] Jan 23 18:45:46 crc kubenswrapper[4688]: E0123 18:45:46.605738 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2622f843-d555-43e1-b359-b490aab07eb2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.605765 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="2622f843-d555-43e1-b359-b490aab07eb2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:46 crc kubenswrapper[4688]: E0123 18:45:46.605793 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfe9b0f-a662-4411-89cb-a14697aceaab" containerName="collect-profiles" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.605802 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe9b0f-a662-4411-89cb-a14697aceaab" containerName="collect-profiles" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.606068 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfe9b0f-a662-4411-89cb-a14697aceaab" containerName="collect-profiles" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.606119 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="2622f843-d555-43e1-b359-b490aab07eb2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.607178 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.610058 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.610881 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.611127 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.611635 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.612462 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.619561 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj"] Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.620032 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.749669 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc 
kubenswrapper[4688]: I0123 18:45:46.749975 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.750087 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.750148 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.750293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.750329 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlst\" (UniqueName: \"kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853406 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853538 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853600 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853667 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853769 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.853800 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlst\" (UniqueName: \"kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.857927 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.857973 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.858710 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.858981 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.859509 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.877244 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlst\" (UniqueName: \"kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:46 crc kubenswrapper[4688]: I0123 18:45:46.984425 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:45:47 crc kubenswrapper[4688]: I0123 18:45:47.646703 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj"] Jan 23 18:45:47 crc kubenswrapper[4688]: I0123 18:45:47.669844 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:45:48 crc kubenswrapper[4688]: I0123 18:45:48.514975 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" event={"ID":"f57f805b-6978-40eb-81c7-32d1ebde0a3f","Type":"ContainerStarted","Data":"f04c107efdc4176b0524a41281bc7cd32bd2f821f83eeeec5ca245989bcd8b2b"} Jan 23 18:45:49 crc kubenswrapper[4688]: I0123 18:45:49.525390 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" event={"ID":"f57f805b-6978-40eb-81c7-32d1ebde0a3f","Type":"ContainerStarted","Data":"50b5782301450b2c262174f83698f88127864d87a172c368334763e8658c1df1"} Jan 23 18:45:49 crc kubenswrapper[4688]: I0123 18:45:49.553419 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" podStartSLOduration=3.016069203 podStartE2EDuration="3.553398053s" podCreationTimestamp="2026-01-23 18:45:46 +0000 UTC" firstStartedPulling="2026-01-23 18:45:47.669516949 +0000 UTC m=+2342.665341390" lastFinishedPulling="2026-01-23 18:45:48.206845799 +0000 UTC m=+2343.202670240" observedRunningTime="2026-01-23 18:45:49.545821434 +0000 UTC m=+2344.541645885" watchObservedRunningTime="2026-01-23 18:45:49.553398053 +0000 UTC m=+2344.549222494" Jan 23 18:45:57 crc kubenswrapper[4688]: I0123 18:45:57.457603 4688 scope.go:117] "RemoveContainer" containerID="7825ef0e66068d1a88c143b0d44f383320a8607a85d261ce3d6c74def72eebb5" Jan 23 18:46:06 crc kubenswrapper[4688]: I0123 18:46:06.965334 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:46:06 crc kubenswrapper[4688]: I0123 18:46:06.965813 4688 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:46:36 crc kubenswrapper[4688]: I0123 18:46:36.964998 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:46:36 crc kubenswrapper[4688]: I0123 18:46:36.965971 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:46:46 crc kubenswrapper[4688]: I0123 18:46:46.084301 4688 generic.go:334] "Generic (PLEG): container finished" podID="f57f805b-6978-40eb-81c7-32d1ebde0a3f" containerID="50b5782301450b2c262174f83698f88127864d87a172c368334763e8658c1df1" exitCode=0 Jan 23 18:46:46 crc kubenswrapper[4688]: I0123 18:46:46.084406 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" event={"ID":"f57f805b-6978-40eb-81c7-32d1ebde0a3f","Type":"ContainerDied","Data":"50b5782301450b2c262174f83698f88127864d87a172c368334763e8658c1df1"} Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.594159 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.722851 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nlst\" (UniqueName: \"kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.722908 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.723014 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.723117 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.723169 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.723281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle\") pod \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\" (UID: \"f57f805b-6978-40eb-81c7-32d1ebde0a3f\") " Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.729289 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.729970 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst" (OuterVolumeSpecName: "kube-api-access-4nlst") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "kube-api-access-4nlst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.751718 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.754172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.754514 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.759656 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory" (OuterVolumeSpecName: "inventory") pod "f57f805b-6978-40eb-81c7-32d1ebde0a3f" (UID: "f57f805b-6978-40eb-81c7-32d1ebde0a3f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826242 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826292 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nlst\" (UniqueName: \"kubernetes.io/projected/f57f805b-6978-40eb-81c7-32d1ebde0a3f-kube-api-access-4nlst\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826308 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826324 4688 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826345 4688 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:47 crc kubenswrapper[4688]: I0123 18:46:47.826357 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f57f805b-6978-40eb-81c7-32d1ebde0a3f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.106415 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" event={"ID":"f57f805b-6978-40eb-81c7-32d1ebde0a3f","Type":"ContainerDied","Data":"f04c107efdc4176b0524a41281bc7cd32bd2f821f83eeeec5ca245989bcd8b2b"} Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.106741 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04c107efdc4176b0524a41281bc7cd32bd2f821f83eeeec5ca245989bcd8b2b" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.106571 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.224902 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d"] Jan 23 18:46:48 crc kubenswrapper[4688]: E0123 18:46:48.225363 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57f805b-6978-40eb-81c7-32d1ebde0a3f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.225382 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57f805b-6978-40eb-81c7-32d1ebde0a3f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.227615 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57f805b-6978-40eb-81c7-32d1ebde0a3f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.228356 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.231832 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.232079 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.232715 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.232929 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.233235 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.239137 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d"] Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.340863 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.340938 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbsxw\" (UniqueName: \"kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.341038 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.341090 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.341269 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.442883 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.443381 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbsxw\" (UniqueName: \"kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.443465 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.443512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.443580 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.447717 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: 
\"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.449962 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.454077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.454593 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.467575 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbsxw\" (UniqueName: \"kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4796d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:48 crc kubenswrapper[4688]: I0123 18:46:48.555618 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:46:49 crc kubenswrapper[4688]: I0123 18:46:49.173617 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d"] Jan 23 18:46:50 crc kubenswrapper[4688]: I0123 18:46:50.137137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" event={"ID":"30fe4fb5-c06c-4741-b83b-b5b6eef2603d","Type":"ContainerStarted","Data":"970fecac6c126fc64d9f0d89b3411c69ce78a285bb9bd0cc6c27540f37acd767"} Jan 23 18:46:50 crc kubenswrapper[4688]: I0123 18:46:50.137687 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" event={"ID":"30fe4fb5-c06c-4741-b83b-b5b6eef2603d","Type":"ContainerStarted","Data":"df1209692bd413ebb057e670f793fef55ecfce5146f3c03f84497e5a4237f8d6"} Jan 23 18:46:50 crc kubenswrapper[4688]: I0123 18:46:50.162569 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" podStartSLOduration=1.641224663 podStartE2EDuration="2.162547209s" podCreationTimestamp="2026-01-23 18:46:48 +0000 UTC" firstStartedPulling="2026-01-23 18:46:49.176778929 +0000 UTC m=+2404.172603370" lastFinishedPulling="2026-01-23 18:46:49.698101445 +0000 UTC m=+2404.693925916" observedRunningTime="2026-01-23 18:46:50.156463334 +0000 UTC m=+2405.152287785" watchObservedRunningTime="2026-01-23 18:46:50.162547209 +0000 UTC m=+2405.158371650" Jan 23 18:47:06 crc kubenswrapper[4688]: I0123 18:47:06.964873 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:47:06 crc kubenswrapper[4688]: I0123 18:47:06.965492 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:47:06 crc kubenswrapper[4688]: I0123 18:47:06.965547 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 18:47:06 crc kubenswrapper[4688]: I0123 18:47:06.966817 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 18:47:06 crc kubenswrapper[4688]: I0123 18:47:06.966932 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" gracePeriod=600 Jan 23 18:47:07 crc kubenswrapper[4688]: E0123 18:47:07.092065 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:47:07 crc kubenswrapper[4688]: I0123 18:47:07.326094 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" exitCode=0 Jan 23 18:47:07 crc kubenswrapper[4688]: I0123 18:47:07.326153 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9"} Jan 23 18:47:07 crc kubenswrapper[4688]: I0123 18:47:07.326216 4688 scope.go:117] "RemoveContainer" containerID="0c3fcf621a6c46a20d9b2fec75c482f369fa6bd4f6d78fbde617289edb9547a1" Jan 23 18:47:07 crc kubenswrapper[4688]: I0123 18:47:07.327836 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:47:07 crc kubenswrapper[4688]: E0123 18:47:07.329349 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:47:21 crc kubenswrapper[4688]: I0123 18:47:21.357215 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:47:21 crc kubenswrapper[4688]: E0123 18:47:21.358301 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:47:33 crc kubenswrapper[4688]: I0123 18:47:33.357151 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:47:33 crc kubenswrapper[4688]: E0123 18:47:33.358110 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:47:46 crc kubenswrapper[4688]: I0123 18:47:46.357180 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:47:46 crc kubenswrapper[4688]: E0123 18:47:46.358083 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:48:01 crc kubenswrapper[4688]: I0123 18:48:01.358171 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:48:01 crc kubenswrapper[4688]: E0123 18:48:01.359323 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:48:15 crc kubenswrapper[4688]: I0123 18:48:15.375674 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:48:15 crc kubenswrapper[4688]: E0123 18:48:15.376497 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:48:30 crc kubenswrapper[4688]: I0123 18:48:30.357587 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:48:30 crc kubenswrapper[4688]: E0123 18:48:30.358386 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:48:42 crc kubenswrapper[4688]: I0123 18:48:42.356265 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:48:42 crc kubenswrapper[4688]: E0123 18:48:42.357319 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:48:57 crc kubenswrapper[4688]: I0123 18:48:57.356512 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:48:57 crc kubenswrapper[4688]: E0123 18:48:57.359313 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:49:12 crc kubenswrapper[4688]: I0123 18:49:12.360302 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:49:12 crc kubenswrapper[4688]: E0123 18:49:12.361374 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:49:26 crc kubenswrapper[4688]: I0123 18:49:26.355785 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:49:26 crc kubenswrapper[4688]: E0123 18:49:26.356734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:49:41 crc kubenswrapper[4688]: I0123 18:49:41.357022 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:49:41 crc kubenswrapper[4688]: E0123 18:49:41.357792 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:49:53 crc kubenswrapper[4688]: I0123 18:49:53.357006 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:49:53 crc kubenswrapper[4688]: E0123 18:49:53.357781 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:50:06 crc kubenswrapper[4688]: I0123 18:50:06.356655 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:50:06 crc kubenswrapper[4688]: E0123 18:50:06.357259 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:50:18 crc kubenswrapper[4688]: I0123 18:50:18.356778 4688 
scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:50:18 crc kubenswrapper[4688]: E0123 18:50:18.357875 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:50:28 crc kubenswrapper[4688]: I0123 18:50:28.513803 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:50:28 crc kubenswrapper[4688]: E0123 18:50:28.514773 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:50:42 crc kubenswrapper[4688]: I0123 18:50:42.356680 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:50:42 crc kubenswrapper[4688]: E0123 18:50:42.357660 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:50:55 crc kubenswrapper[4688]: I0123 18:50:55.363942 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:50:55 crc kubenswrapper[4688]: E0123 18:50:55.367873 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:51:09 crc kubenswrapper[4688]: I0123 18:51:09.356738 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:51:09 crc kubenswrapper[4688]: E0123 18:51:09.357753 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:51:23 crc kubenswrapper[4688]: I0123 18:51:23.359730 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:51:23 crc kubenswrapper[4688]: E0123 18:51:23.360838 4688 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:51:36 crc kubenswrapper[4688]: I0123 18:51:36.357203 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:51:36 crc kubenswrapper[4688]: E0123 18:51:36.357970 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:51:46 crc kubenswrapper[4688]: I0123 18:51:46.601263 4688 generic.go:334] "Generic (PLEG): container finished" podID="30fe4fb5-c06c-4741-b83b-b5b6eef2603d" containerID="970fecac6c126fc64d9f0d89b3411c69ce78a285bb9bd0cc6c27540f37acd767" exitCode=0 Jan 23 18:51:46 crc kubenswrapper[4688]: I0123 18:51:46.601326 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" event={"ID":"30fe4fb5-c06c-4741-b83b-b5b6eef2603d","Type":"ContainerDied","Data":"970fecac6c126fc64d9f0d89b3411c69ce78a285bb9bd0cc6c27540f37acd767"} Jan 23 18:51:47 crc kubenswrapper[4688]: I0123 18:51:47.357703 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:51:47 crc kubenswrapper[4688]: E0123 18:51:47.358285 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.129414 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.220419 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam\") pod \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.220531 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory\") pod \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.220626 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0\") pod \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.220728 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle\") pod \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.220796 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbsxw\" (UniqueName: \"kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw\") pod \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\" (UID: \"30fe4fb5-c06c-4741-b83b-b5b6eef2603d\") " Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.241691 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "30fe4fb5-c06c-4741-b83b-b5b6eef2603d" (UID: "30fe4fb5-c06c-4741-b83b-b5b6eef2603d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.241774 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw" (OuterVolumeSpecName: "kube-api-access-wbsxw") pod "30fe4fb5-c06c-4741-b83b-b5b6eef2603d" (UID: "30fe4fb5-c06c-4741-b83b-b5b6eef2603d"). InnerVolumeSpecName "kube-api-access-wbsxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.256804 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "30fe4fb5-c06c-4741-b83b-b5b6eef2603d" (UID: "30fe4fb5-c06c-4741-b83b-b5b6eef2603d"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.259390 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory" (OuterVolumeSpecName: "inventory") pod "30fe4fb5-c06c-4741-b83b-b5b6eef2603d" (UID: "30fe4fb5-c06c-4741-b83b-b5b6eef2603d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.271515 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30fe4fb5-c06c-4741-b83b-b5b6eef2603d" (UID: "30fe4fb5-c06c-4741-b83b-b5b6eef2603d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.325219 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.325258 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.325271 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.325284 4688 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.325296 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbsxw\" (UniqueName: \"kubernetes.io/projected/30fe4fb5-c06c-4741-b83b-b5b6eef2603d-kube-api-access-wbsxw\") on node \"crc\" DevicePath \"\"" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.627557 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" event={"ID":"30fe4fb5-c06c-4741-b83b-b5b6eef2603d","Type":"ContainerDied","Data":"df1209692bd413ebb057e670f793fef55ecfce5146f3c03f84497e5a4237f8d6"} Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.627638 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1209692bd413ebb057e670f793fef55ecfce5146f3c03f84497e5a4237f8d6" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.627677 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4796d" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.749738 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4"] Jan 23 18:51:48 crc kubenswrapper[4688]: E0123 18:51:48.750391 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fe4fb5-c06c-4741-b83b-b5b6eef2603d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.750413 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fe4fb5-c06c-4741-b83b-b5b6eef2603d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.750654 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fe4fb5-c06c-4741-b83b-b5b6eef2603d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.751676 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.759710 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760204 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760241 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760208 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760373 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760379 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.760414 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.766602 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4"] Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.833997 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834053 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92kq\" (UniqueName: \"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834089 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834120 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834136 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834281 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834337 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.834360 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936237 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936314 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936403 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936452 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92kq\" (UniqueName: \"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936512 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936575 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936606 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936810 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.936851 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.937940 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.940498 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.940586 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.940630 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.940751 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.943663 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.944602 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: I0123 18:51:48.944950 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:48 crc kubenswrapper[4688]: 
I0123 18:51:48.954054 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92kq\" (UniqueName: \"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-j64r4\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:49 crc kubenswrapper[4688]: I0123 18:51:49.116413 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:51:49 crc kubenswrapper[4688]: I0123 18:51:49.723357 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 18:51:49 crc kubenswrapper[4688]: I0123 18:51:49.723546 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4"] Jan 23 18:51:50 crc kubenswrapper[4688]: I0123 18:51:50.652576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" event={"ID":"b1183bb9-7531-4cbc-b0b8-c3df2ba56953","Type":"ContainerStarted","Data":"6d4c0f394bda513e13854270da197aa307b600dd46827a5a5f78ab9af479a2ef"} Jan 23 18:51:50 crc kubenswrapper[4688]: I0123 18:51:50.652876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" event={"ID":"b1183bb9-7531-4cbc-b0b8-c3df2ba56953","Type":"ContainerStarted","Data":"339859ecdb1d8d94af4818e899fe5e682d41dec96b3e08cc4a2ad0309762a390"} Jan 23 18:51:50 crc kubenswrapper[4688]: I0123 18:51:50.674163 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" podStartSLOduration=2.206939672 podStartE2EDuration="2.674113474s" podCreationTimestamp="2026-01-23 18:51:48 +0000 UTC" firstStartedPulling="2026-01-23 18:51:49.722912946 +0000 UTC m=+2704.718737387" lastFinishedPulling="2026-01-23 18:51:50.190086748 +0000 UTC m=+2705.185911189" observedRunningTime="2026-01-23 18:51:50.669312316 +0000 UTC m=+2705.665136767" watchObservedRunningTime="2026-01-23 18:51:50.674113474 +0000 UTC m=+2705.669937915" Jan 23 18:52:00 crc kubenswrapper[4688]: I0123 18:52:00.357034 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:52:00 crc kubenswrapper[4688]: E0123 18:52:00.357814 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 18:52:12 crc kubenswrapper[4688]: I0123 18:52:12.356573 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9" Jan 23 18:52:12 crc kubenswrapper[4688]: I0123 18:52:12.868782 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c"} Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.758985 4688 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-rk2br"] Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.761599 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.779531 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rk2br"] Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.805424 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.805628 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.805663 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnk45\" (UniqueName: \"kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.907723 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.907785 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnk45\" (UniqueName: \"kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.907860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.908412 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.908719 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " 
pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:15 crc kubenswrapper[4688]: I0123 18:53:15.933795 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnk45\" (UniqueName: \"kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45\") pod \"redhat-operators-rk2br\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") " pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:16 crc kubenswrapper[4688]: I0123 18:53:16.089152 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:16 crc kubenswrapper[4688]: I0123 18:53:16.627433 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rk2br"] Jan 23 18:53:17 crc kubenswrapper[4688]: I0123 18:53:17.570155 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerID="fdf175a9a685ba711ee9270269cf952662ed0f478441ff1caa6d075fe8ace81b" exitCode=0 Jan 23 18:53:17 crc kubenswrapper[4688]: I0123 18:53:17.570249 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerDied","Data":"fdf175a9a685ba711ee9270269cf952662ed0f478441ff1caa6d075fe8ace81b"} Jan 23 18:53:17 crc kubenswrapper[4688]: I0123 18:53:17.570549 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerStarted","Data":"5525a8af2e8827bd9854bd227c7833abd2531ed649be407aafd75c11e1a4fe94"} Jan 23 18:53:19 crc kubenswrapper[4688]: I0123 18:53:19.604915 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerStarted","Data":"a50ea00cbbc997a0978dd734d9e419039b26d632daa5cd72bdf81d600fab8bd6"} Jan 23 18:53:22 crc kubenswrapper[4688]: I0123 18:53:22.651721 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerID="a50ea00cbbc997a0978dd734d9e419039b26d632daa5cd72bdf81d600fab8bd6" exitCode=0 Jan 23 18:53:22 crc kubenswrapper[4688]: I0123 18:53:22.651782 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerDied","Data":"a50ea00cbbc997a0978dd734d9e419039b26d632daa5cd72bdf81d600fab8bd6"} Jan 23 18:53:24 crc kubenswrapper[4688]: I0123 18:53:24.673671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerStarted","Data":"7680b2d228e62dd6066982b462bf6f189a83f7a8d8105b59599ab6cf600db36a"} Jan 23 18:53:24 crc kubenswrapper[4688]: I0123 18:53:24.702084 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rk2br" podStartSLOduration=3.777488002 podStartE2EDuration="9.70205666s" podCreationTimestamp="2026-01-23 18:53:15 +0000 UTC" firstStartedPulling="2026-01-23 18:53:17.572689485 +0000 UTC m=+2792.568513926" lastFinishedPulling="2026-01-23 18:53:23.497258123 +0000 UTC m=+2798.493082584" observedRunningTime="2026-01-23 18:53:24.694520024 +0000 UTC m=+2799.690344485" watchObservedRunningTime="2026-01-23 18:53:24.70205666 +0000 UTC m=+2799.697881111" Jan 23 
18:53:26 crc kubenswrapper[4688]: I0123 18:53:26.090750 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:26 crc kubenswrapper[4688]: I0123 18:53:26.091065 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:27 crc kubenswrapper[4688]: I0123 18:53:27.136783 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rk2br" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="registry-server" probeResult="failure" output=< Jan 23 18:53:27 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 18:53:27 crc kubenswrapper[4688]: > Jan 23 18:53:33 crc kubenswrapper[4688]: I0123 18:53:33.984758 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:33 crc kubenswrapper[4688]: I0123 18:53:33.990499 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:33 crc kubenswrapper[4688]: I0123 18:53:33.995493 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.138401 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.138530 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgpl\" (UniqueName: \"kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.138674 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.171491 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h6grk"] Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.173590 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.185864 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6grk"] Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.240352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.240424 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.240509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgpl\" (UniqueName: \"kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.241452 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.241790 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.261519 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgpl\" (UniqueName: \"kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl\") pod \"redhat-marketplace-mzwlg\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.322378 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.342834 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wll\" (UniqueName: \"kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.342934 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.342968 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.445787 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7wll\" (UniqueName: \"kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.446238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.446288 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.446899 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.446989 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content\") pod \"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.465877 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7wll\" (UniqueName: \"kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll\") pod 
\"community-operators-h6grk\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") " pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.496769 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6grk" Jan 23 18:53:34 crc kubenswrapper[4688]: W0123 18:53:34.926154 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb0ba81_fd03_4945_a6cc_8433a410c6d7.slice/crio-4fd37fcb1a00fb4e5f7434a6d036ea7867a913a686e8a17657db85627ee533a6 WatchSource:0}: Error finding container 4fd37fcb1a00fb4e5f7434a6d036ea7867a913a686e8a17657db85627ee533a6: Status 404 returned error can't find the container with id 4fd37fcb1a00fb4e5f7434a6d036ea7867a913a686e8a17657db85627ee533a6 Jan 23 18:53:34 crc kubenswrapper[4688]: I0123 18:53:34.926749 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.070286 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h6grk"] Jan 23 18:53:35 crc kubenswrapper[4688]: W0123 18:53:35.074636 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456d4564_df2b_4ccf_807d_6a10b3cb05d9.slice/crio-40d4441f9a48c229f7af1bc8c732647403dee6c861189c35fc56aac388552ce4 WatchSource:0}: Error finding container 40d4441f9a48c229f7af1bc8c732647403dee6c861189c35fc56aac388552ce4: Status 404 returned error can't find the container with id 40d4441f9a48c229f7af1bc8c732647403dee6c861189c35fc56aac388552ce4 Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.784812 4688 generic.go:334] "Generic (PLEG): container finished" podID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerID="d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96" exitCode=0 Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.784873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerDied","Data":"d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96"} Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.785149 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerStarted","Data":"4fd37fcb1a00fb4e5f7434a6d036ea7867a913a686e8a17657db85627ee533a6"} Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.787087 4688 generic.go:334] "Generic (PLEG): container finished" podID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerID="5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575" exitCode=0 Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.787125 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerDied","Data":"5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575"} Jan 23 18:53:35 crc kubenswrapper[4688]: I0123 18:53:35.787153 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerStarted","Data":"40d4441f9a48c229f7af1bc8c732647403dee6c861189c35fc56aac388552ce4"} Jan 
23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.149278 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.211119 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rk2br" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.372966 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.387480 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.387650 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.396449 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwhj\" (UniqueName: \"kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.396616 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.396687 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.498823 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwhj\" (UniqueName: \"kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.498979 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.499059 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.499435 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.499601 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.525450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwhj\" (UniqueName: \"kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj\") pod \"certified-operators-dl5wt\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:36 crc kubenswrapper[4688]: I0123 18:53:36.775381 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:37 crc kubenswrapper[4688]: W0123 18:53:37.406889 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6acdce28_107f_4f08_a8c9_7e51e48f2bc9.slice/crio-889f7b9af648b91fc124e13650949b3a297b48e37911250ef373b44d5088a5b5 WatchSource:0}: Error finding container 889f7b9af648b91fc124e13650949b3a297b48e37911250ef373b44d5088a5b5: Status 404 returned error can't find the container with id 889f7b9af648b91fc124e13650949b3a297b48e37911250ef373b44d5088a5b5 Jan 23 18:53:37 crc kubenswrapper[4688]: I0123 18:53:37.415835 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:37 crc kubenswrapper[4688]: I0123 18:53:37.823520 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerStarted","Data":"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b"} Jan 23 18:53:37 crc kubenswrapper[4688]: I0123 18:53:37.825155 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerStarted","Data":"889f7b9af648b91fc124e13650949b3a297b48e37911250ef373b44d5088a5b5"} Jan 23 18:53:37 crc kubenswrapper[4688]: I0123 18:53:37.827659 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerStarted","Data":"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"} Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.844115 4688 generic.go:334] "Generic (PLEG): container finished" podID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerID="3c1fc3103308b88d16cf45455e6ae7a6e8007f45e34f164aa3855b8440e45749" exitCode=0 Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.844296 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerDied","Data":"3c1fc3103308b88d16cf45455e6ae7a6e8007f45e34f164aa3855b8440e45749"} Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.849661 4688 generic.go:334] "Generic 
Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.849661 4688 generic.go:334] "Generic (PLEG): container finished" podID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerID="c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca" exitCode=0
Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.849766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerDied","Data":"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"}
Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.852354 4688 generic.go:334] "Generic (PLEG): container finished" podID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerID="32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b" exitCode=0
Jan 23 18:53:38 crc kubenswrapper[4688]: I0123 18:53:38.904593 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerDied","Data":"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b"}
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.767824 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rk2br"]
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.768594 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rk2br" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="registry-server" containerID="cri-o://7680b2d228e62dd6066982b462bf6f189a83f7a8d8105b59599ab6cf600db36a" gracePeriod=2
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.929282 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerStarted","Data":"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa"}
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.931923 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerStarted","Data":"2c6c4613c68e00cc61c9b5734da443e6497d232d46cd421a3f591f5fa5db71e1"}
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.934843 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerStarted","Data":"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"}
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.958267 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mzwlg" podStartSLOduration=3.948591452 podStartE2EDuration="7.958239644s" podCreationTimestamp="2026-01-23 18:53:33 +0000 UTC" firstStartedPulling="2026-01-23 18:53:35.786374746 +0000 UTC m=+2810.782199187" lastFinishedPulling="2026-01-23 18:53:39.796022938 +0000 UTC m=+2814.791847379" observedRunningTime="2026-01-23 18:53:40.952765707 +0000 UTC m=+2815.948590168" watchObservedRunningTime="2026-01-23 18:53:40.958239644 +0000 UTC m=+2815.954064105"
Jan 23 18:53:40 crc kubenswrapper[4688]: I0123 18:53:40.989500 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h6grk" podStartSLOduration=3.043446586 podStartE2EDuration="6.989363788s" podCreationTimestamp="2026-01-23 18:53:34 +0000 UTC" firstStartedPulling="2026-01-23 18:53:35.790881036 +0000 UTC m=+2810.786705477" lastFinishedPulling="2026-01-23 18:53:39.736798238 +0000 UTC m=+2814.732622679" observedRunningTime="2026-01-23 18:53:40.979418992 +0000 UTC m=+2815.975243453" watchObservedRunningTime="2026-01-23 18:53:40.989363788 +0000 UTC m=+2815.985188239"
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.945733 4688 generic.go:334] "Generic (PLEG): container finished" podID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerID="2c6c4613c68e00cc61c9b5734da443e6497d232d46cd421a3f591f5fa5db71e1" exitCode=0
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.945810 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerDied","Data":"2c6c4613c68e00cc61c9b5734da443e6497d232d46cd421a3f591f5fa5db71e1"}
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.950940 4688 generic.go:334] "Generic (PLEG): container finished" podID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerID="7680b2d228e62dd6066982b462bf6f189a83f7a8d8105b59599ab6cf600db36a" exitCode=0
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.951027 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerDied","Data":"7680b2d228e62dd6066982b462bf6f189a83f7a8d8105b59599ab6cf600db36a"}
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.951059 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rk2br" event={"ID":"ba4e7333-a94b-4307-800c-055845b5b6d9","Type":"ContainerDied","Data":"5525a8af2e8827bd9854bd227c7833abd2531ed649be407aafd75c11e1a4fe94"}
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.951073 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5525a8af2e8827bd9854bd227c7833abd2531ed649be407aafd75c11e1a4fe94"
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.975493 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rk2br"
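The two figures in each pod_startup_latency_tracker.go:104 entry relate in a simple way: for redhat-marketplace-mzwlg, podStartE2EDuration (7.958239644s, creation to observed running) minus the image-pull window (18:53:39.796022938 less 18:53:35.786374746, or 4.009648192s) gives exactly podStartSLOduration (3.948591452s). That arithmetic, worked with the numbers from the entry above (the subtraction relationship is inferred from these figures, not quoted from kubelet source), in a minimal Go sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the redhat-marketplace-mzwlg entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:53:33Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:53:35.786374746Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:53:39.796022938Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:53:40.958239644Z")

	e2e := running.Sub(created)          // podStartE2EDuration = 7.958239644s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration = 3.948591452s
	fmt.Println("E2E:", e2e, "SLO (pull time excluded):", slo)
}
```

In other words, the SLO figure excludes time spent pulling images, which is why a pod on a slow registry can show a large E2E duration while still reporting a small SLO duration.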
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.981583 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content\") pod \"ba4e7333-a94b-4307-800c-055845b5b6d9\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") "
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.981661 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnk45\" (UniqueName: \"kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45\") pod \"ba4e7333-a94b-4307-800c-055845b5b6d9\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") "
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.981713 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities\") pod \"ba4e7333-a94b-4307-800c-055845b5b6d9\" (UID: \"ba4e7333-a94b-4307-800c-055845b5b6d9\") "
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.982682 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities" (OuterVolumeSpecName: "utilities") pod "ba4e7333-a94b-4307-800c-055845b5b6d9" (UID: "ba4e7333-a94b-4307-800c-055845b5b6d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:53:41 crc kubenswrapper[4688]: I0123 18:53:41.987644 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45" (OuterVolumeSpecName: "kube-api-access-qnk45") pod "ba4e7333-a94b-4307-800c-055845b5b6d9" (UID: "ba4e7333-a94b-4307-800c-055845b5b6d9"). InnerVolumeSpecName "kube-api-access-qnk45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:53:42 crc kubenswrapper[4688]: I0123 18:53:42.083318 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnk45\" (UniqueName: \"kubernetes.io/projected/ba4e7333-a94b-4307-800c-055845b5b6d9-kube-api-access-qnk45\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:42 crc kubenswrapper[4688]: I0123 18:53:42.083357 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:42 crc kubenswrapper[4688]: I0123 18:53:42.120370 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba4e7333-a94b-4307-800c-055845b5b6d9" (UID: "ba4e7333-a94b-4307-800c-055845b5b6d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:53:42 crc kubenswrapper[4688]: I0123 18:53:42.185127 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba4e7333-a94b-4307-800c-055845b5b6d9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:42 crc kubenswrapper[4688]: I0123 18:53:42.962916 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rk2br"
Jan 23 18:53:43 crc kubenswrapper[4688]: I0123 18:53:43.028377 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rk2br"]
Jan 23 18:53:43 crc kubenswrapper[4688]: I0123 18:53:43.037461 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rk2br"]
Jan 23 18:53:43 crc kubenswrapper[4688]: I0123 18:53:43.368127 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" path="/var/lib/kubelet/pods/ba4e7333-a94b-4307-800c-055845b5b6d9/volumes"
Jan 23 18:53:43 crc kubenswrapper[4688]: I0123 18:53:43.975975 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerStarted","Data":"9ec0afe3cdd5edc16f92dc0691e9b2d782ffc4fe7606eea6658c75df3f3dde03"}
Jan 23 18:53:43 crc kubenswrapper[4688]: I0123 18:53:43.996070 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dl5wt" podStartSLOduration=4.052524303 podStartE2EDuration="7.996052006s" podCreationTimestamp="2026-01-23 18:53:36 +0000 UTC" firstStartedPulling="2026-01-23 18:53:38.846861719 +0000 UTC m=+2813.842686160" lastFinishedPulling="2026-01-23 18:53:42.790389422 +0000 UTC m=+2817.786213863" observedRunningTime="2026-01-23 18:53:43.992620077 +0000 UTC m=+2818.988444528" watchObservedRunningTime="2026-01-23 18:53:43.996052006 +0000 UTC m=+2818.991876447"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.323761 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mzwlg"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.324043 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mzwlg"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.369309 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mzwlg"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.497790 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.497836 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:44 crc kubenswrapper[4688]: I0123 18:53:44.549510 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:45 crc kubenswrapper[4688]: I0123 18:53:45.093499 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:46 crc kubenswrapper[4688]: I0123 18:53:46.775562 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dl5wt"
Jan 23 18:53:46 crc kubenswrapper[4688]: I0123 18:53:46.776803 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dl5wt"
Jan 23 18:53:46 crc kubenswrapper[4688]: I0123 18:53:46.832747 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dl5wt"
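Each registry pod above repeats the same probe pattern: the startup probe first reports unhealthy, the readiness probe shows an empty status (it has not been run yet), then startup flips to started, and only afterwards does readiness become ready. The readiness result stays empty because readiness is not evaluated until the startup probe succeeds. An illustrative model of that gating (the types and method are hypothetical, not kubelet code):

```go
package main

import "fmt"

// podProbes models the gating seen above: readiness results stay empty
// until the startup probe has succeeded at least once.
type podProbes struct{ started, ready bool }

func (p *podProbes) observe(startupOK, readyOK bool) (startup, readiness string) {
	if !p.started {
		if !startupOK {
			return "unhealthy", "" // readiness not evaluated yet
		}
		p.started = true
		return "started", ""
	}
	if readyOK {
		p.ready = true
		return "started", "ready"
	}
	return "started", ""
}

func main() {
	var p podProbes
	// Sequence mirroring the log: failing startup, passing startup,
	// then a passing readiness probe.
	for _, obs := range [][2]bool{{false, false}, {true, false}, {true, true}} {
		s, r := p.observe(obs[0], obs[1])
		fmt.Printf("probe=startup status=%q probe=readiness status=%q\n", s, r)
	}
}
```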
Jan 23 18:53:47 crc kubenswrapper[4688]: I0123 18:53:47.966936 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6grk"]
Jan 23 18:53:47 crc kubenswrapper[4688]: I0123 18:53:47.967518 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h6grk" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="registry-server" containerID="cri-o://d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b" gracePeriod=2
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.076913 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dl5wt"
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.443855 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.536534 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities\") pod \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") "
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.536606 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7wll\" (UniqueName: \"kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll\") pod \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") "
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.536671 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content\") pod \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\" (UID: \"456d4564-df2b-4ccf-807d-6a10b3cb05d9\") "
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.537338 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities" (OuterVolumeSpecName: "utilities") pod "456d4564-df2b-4ccf-807d-6a10b3cb05d9" (UID: "456d4564-df2b-4ccf-807d-6a10b3cb05d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.566540 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll" (OuterVolumeSpecName: "kube-api-access-g7wll") pod "456d4564-df2b-4ccf-807d-6a10b3cb05d9" (UID: "456d4564-df2b-4ccf-807d-6a10b3cb05d9"). InnerVolumeSpecName "kube-api-access-g7wll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.592437 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "456d4564-df2b-4ccf-807d-6a10b3cb05d9" (UID: "456d4564-df2b-4ccf-807d-6a10b3cb05d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.639708 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.639743 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7wll\" (UniqueName: \"kubernetes.io/projected/456d4564-df2b-4ccf-807d-6a10b3cb05d9-kube-api-access-g7wll\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:48 crc kubenswrapper[4688]: I0123 18:53:48.639753 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456d4564-df2b-4ccf-807d-6a10b3cb05d9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.034418 4688 generic.go:334] "Generic (PLEG): container finished" podID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerID="d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b" exitCode=0
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.034482 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerDied","Data":"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"}
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.034548 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h6grk" event={"ID":"456d4564-df2b-4ccf-807d-6a10b3cb05d9","Type":"ContainerDied","Data":"40d4441f9a48c229f7af1bc8c732647403dee6c861189c35fc56aac388552ce4"}
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.034502 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h6grk"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.034571 4688 scope.go:117] "RemoveContainer" containerID="d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.082553 4688 scope.go:117] "RemoveContainer" containerID="c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.098463 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h6grk"]
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.107708 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h6grk"]
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.133532 4688 scope.go:117] "RemoveContainer" containerID="5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.155623 4688 scope.go:117] "RemoveContainer" containerID="d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"
Jan 23 18:53:49 crc kubenswrapper[4688]: E0123 18:53:49.156299 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b\": container with ID starting with d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b not found: ID does not exist" containerID="d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.156338 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b"} err="failed to get container status \"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b\": rpc error: code = NotFound desc = could not find container \"d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b\": container with ID starting with d0b564f71ca3c25a4cd25bbd3ebd56569e2fed0a311c0f5c20057a2606d0271b not found: ID does not exist"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.156361 4688 scope.go:117] "RemoveContainer" containerID="c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"
Jan 23 18:53:49 crc kubenswrapper[4688]: E0123 18:53:49.157015 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca\": container with ID starting with c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca not found: ID does not exist" containerID="c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.157157 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca"} err="failed to get container status \"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca\": rpc error: code = NotFound desc = could not find container \"c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca\": container with ID starting with c5558bdbb8bc177241bb6267e054fd9985f817ff558794d0934845ea72bf6dca not found: ID does not exist"
Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.157295 4688 scope.go:117] "RemoveContainer" containerID="5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575"
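The E-level "ContainerStatus from runtime service failed" entries here look alarming but are benign: the container was already removed along with its pod sandbox, so when container garbage collection asks CRI-O for status before deleting, the runtime answers with gRPC NotFound, which gets logged and then effectively treated as "already done". A sketch of that idempotent-delete pattern using the real gRPC status helpers (the wrapper function itself is hypothetical, not the kubelet's code):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer shows the pattern: a NotFound from the runtime means
// the container is already gone, so the delete counts as a success.
func removeContainer(id string, rpcCall func(string) error) error {
	if err := rpcCall(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone; treating delete as success\n", id)
			return nil
		}
		return fmt.Errorf("DeleteContainer %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulated runtime that always answers NotFound, as in the log above.
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer("d0b564f71ca3", notFound); err != nil {
		fmt.Println(errors.Unwrap(err))
	}
}
```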
containerID="5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575" Jan 23 18:53:49 crc kubenswrapper[4688]: E0123 18:53:49.157742 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575\": container with ID starting with 5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575 not found: ID does not exist" containerID="5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575" Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.157769 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575"} err="failed to get container status \"5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575\": rpc error: code = NotFound desc = could not find container \"5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575\": container with ID starting with 5747c8f8caa6a085374dd0911258f9c8e0861d4622abaf166b95f25f748c3575 not found: ID does not exist" Jan 23 18:53:49 crc kubenswrapper[4688]: I0123 18:53:49.372226 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" path="/var/lib/kubelet/pods/456d4564-df2b-4ccf-807d-6a10b3cb05d9/volumes" Jan 23 18:53:50 crc kubenswrapper[4688]: I0123 18:53:50.365216 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:50 crc kubenswrapper[4688]: I0123 18:53:50.365801 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dl5wt" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="registry-server" containerID="cri-o://9ec0afe3cdd5edc16f92dc0691e9b2d782ffc4fe7606eea6658c75df3f3dde03" gracePeriod=2 Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.067653 4688 generic.go:334] "Generic (PLEG): container finished" podID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerID="9ec0afe3cdd5edc16f92dc0691e9b2d782ffc4fe7606eea6658c75df3f3dde03" exitCode=0 Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.067725 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerDied","Data":"9ec0afe3cdd5edc16f92dc0691e9b2d782ffc4fe7606eea6658c75df3f3dde03"} Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.435759 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.447543 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities\") pod \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.448985 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities" (OuterVolumeSpecName: "utilities") pod "6acdce28-107f-4f08-a8c9-7e51e48f2bc9" (UID: "6acdce28-107f-4f08-a8c9-7e51e48f2bc9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.549496 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmwhj\" (UniqueName: \"kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj\") pod \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.549611 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content\") pod \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\" (UID: \"6acdce28-107f-4f08-a8c9-7e51e48f2bc9\") " Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.550280 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.555139 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj" (OuterVolumeSpecName: "kube-api-access-lmwhj") pod "6acdce28-107f-4f08-a8c9-7e51e48f2bc9" (UID: "6acdce28-107f-4f08-a8c9-7e51e48f2bc9"). InnerVolumeSpecName "kube-api-access-lmwhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.601888 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6acdce28-107f-4f08-a8c9-7e51e48f2bc9" (UID: "6acdce28-107f-4f08-a8c9-7e51e48f2bc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.652246 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmwhj\" (UniqueName: \"kubernetes.io/projected/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-kube-api-access-lmwhj\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:51 crc kubenswrapper[4688]: I0123 18:53:51.652283 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acdce28-107f-4f08-a8c9-7e51e48f2bc9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.081864 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl5wt" event={"ID":"6acdce28-107f-4f08-a8c9-7e51e48f2bc9","Type":"ContainerDied","Data":"889f7b9af648b91fc124e13650949b3a297b48e37911250ef373b44d5088a5b5"} Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.081932 4688 scope.go:117] "RemoveContainer" containerID="9ec0afe3cdd5edc16f92dc0691e9b2d782ffc4fe7606eea6658c75df3f3dde03" Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.081949 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dl5wt" Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.103847 4688 scope.go:117] "RemoveContainer" containerID="2c6c4613c68e00cc61c9b5734da443e6497d232d46cd421a3f591f5fa5db71e1" Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.123117 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.132568 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dl5wt"] Jan 23 18:53:52 crc kubenswrapper[4688]: I0123 18:53:52.151232 4688 scope.go:117] "RemoveContainer" containerID="3c1fc3103308b88d16cf45455e6ae7a6e8007f45e34f164aa3855b8440e45749" Jan 23 18:53:53 crc kubenswrapper[4688]: I0123 18:53:53.371592 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" path="/var/lib/kubelet/pods/6acdce28-107f-4f08-a8c9-7e51e48f2bc9/volumes" Jan 23 18:53:54 crc kubenswrapper[4688]: I0123 18:53:54.385393 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:54 crc kubenswrapper[4688]: I0123 18:53:54.965025 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.112389 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mzwlg" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="registry-server" containerID="cri-o://cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa" gracePeriod=2 Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.700587 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.849857 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content\") pod \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.849952 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgpl\" (UniqueName: \"kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl\") pod \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.850139 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities\") pod \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\" (UID: \"ffb0ba81-fd03-4945-a6cc-8433a410c6d7\") " Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.850955 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities" (OuterVolumeSpecName: "utilities") pod "ffb0ba81-fd03-4945-a6cc-8433a410c6d7" (UID: "ffb0ba81-fd03-4945-a6cc-8433a410c6d7"). InnerVolumeSpecName "utilities". 
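By this point the same teardown choreography has repeated for each catalog pod: SyncLoop DELETE from the API, a kill with gracePeriod=2, PLEG ContainerDied once the process exits (exitCode=0, so it honored SIGTERM within the grace window), volume unmount and detach, SyncLoop REMOVE, and finally the orphaned volumes dir cleanup. A minimal sketch of the grace-period step, under the assumption that the runtime escalates to a forced kill only after the deadline passes (the stop callback is a stand-in, not the CRI client API):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// stopContainer sketches the gracePeriod=2 kill seen above: request a stop,
// then wait out the grace period before escalating to a forced kill.
func stopContainer(id string, grace time.Duration, stopped <-chan struct{}) {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	fmt.Printf("Killing container %s with a grace period of %s\n", id, grace)
	select {
	case <-stopped:
		fmt.Println("container exited within grace period (exitCode=0)")
	case <-ctx.Done():
		fmt.Println("grace period expired; force-killing")
	}
}

func main() {
	done := make(chan struct{})
	go func() { time.Sleep(500 * time.Millisecond); close(done) }() // container exits quickly
	stopContainer("cri-o://9ec0afe3", 2*time.Second, done)
}
```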
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.856487 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl" (OuterVolumeSpecName: "kube-api-access-wsgpl") pod "ffb0ba81-fd03-4945-a6cc-8433a410c6d7" (UID: "ffb0ba81-fd03-4945-a6cc-8433a410c6d7"). InnerVolumeSpecName "kube-api-access-wsgpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.878780 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffb0ba81-fd03-4945-a6cc-8433a410c6d7" (UID: "ffb0ba81-fd03-4945-a6cc-8433a410c6d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.952921 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.952959 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:55 crc kubenswrapper[4688]: I0123 18:53:55.952973 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgpl\" (UniqueName: \"kubernetes.io/projected/ffb0ba81-fd03-4945-a6cc-8433a410c6d7-kube-api-access-wsgpl\") on node \"crc\" DevicePath \"\"" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.125956 4688 generic.go:334] "Generic (PLEG): container finished" podID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerID="cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa" exitCode=0 Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.126003 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerDied","Data":"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa"} Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.126031 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzwlg" event={"ID":"ffb0ba81-fd03-4945-a6cc-8433a410c6d7","Type":"ContainerDied","Data":"4fd37fcb1a00fb4e5f7434a6d036ea7867a913a686e8a17657db85627ee533a6"} Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.126036 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzwlg" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.126051 4688 scope.go:117] "RemoveContainer" containerID="cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.149077 4688 scope.go:117] "RemoveContainer" containerID="32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.168792 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.177422 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzwlg"] Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.196488 4688 scope.go:117] "RemoveContainer" containerID="d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.247650 4688 scope.go:117] "RemoveContainer" containerID="cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa" Jan 23 18:53:56 crc kubenswrapper[4688]: E0123 18:53:56.248253 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa\": container with ID starting with cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa not found: ID does not exist" containerID="cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.248287 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa"} err="failed to get container status \"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa\": rpc error: code = NotFound desc = could not find container \"cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa\": container with ID starting with cce7126f278d60c5f7d526d96e4bdd3f041e1e701db3db65bc603c32a64d2bfa not found: ID does not exist" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.248317 4688 scope.go:117] "RemoveContainer" containerID="32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b" Jan 23 18:53:56 crc kubenswrapper[4688]: E0123 18:53:56.248753 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b\": container with ID starting with 32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b not found: ID does not exist" containerID="32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.248805 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b"} err="failed to get container status \"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b\": rpc error: code = NotFound desc = could not find container \"32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b\": container with ID starting with 32ff7f29be894c580fa4ae19ce19c043b7b89bdb984ef06d7b098524b902640b not found: ID does not exist" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.248840 4688 scope.go:117] "RemoveContainer" 
containerID="d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96" Jan 23 18:53:56 crc kubenswrapper[4688]: E0123 18:53:56.249390 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96\": container with ID starting with d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96 not found: ID does not exist" containerID="d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96" Jan 23 18:53:56 crc kubenswrapper[4688]: I0123 18:53:56.249479 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96"} err="failed to get container status \"d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96\": rpc error: code = NotFound desc = could not find container \"d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96\": container with ID starting with d102bf828a67e7e9a218f0f349c7fb6a3b51934ae11ccb431925d97542a1fa96 not found: ID does not exist" Jan 23 18:53:57 crc kubenswrapper[4688]: I0123 18:53:57.369458 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" path="/var/lib/kubelet/pods/ffb0ba81-fd03-4945-a6cc-8433a410c6d7/volumes" Jan 23 18:54:22 crc kubenswrapper[4688]: I0123 18:54:22.398729 4688 generic.go:334] "Generic (PLEG): container finished" podID="b1183bb9-7531-4cbc-b0b8-c3df2ba56953" containerID="6d4c0f394bda513e13854270da197aa307b600dd46827a5a5f78ab9af479a2ef" exitCode=0 Jan 23 18:54:22 crc kubenswrapper[4688]: I0123 18:54:22.398969 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" event={"ID":"b1183bb9-7531-4cbc-b0b8-c3df2ba56953","Type":"ContainerDied","Data":"6d4c0f394bda513e13854270da197aa307b600dd46827a5a5f78ab9af479a2ef"} Jan 23 18:54:23 crc kubenswrapper[4688]: I0123 18:54:23.929395 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080378 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080565 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j92kq\" (UniqueName: \"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080609 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080731 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080805 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080831 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080860 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080900 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.080926 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam\") pod \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\" (UID: \"b1183bb9-7531-4cbc-b0b8-c3df2ba56953\") " Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.087325 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq" (OuterVolumeSpecName: "kube-api-access-j92kq") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "kube-api-access-j92kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.087897 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.111519 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.114122 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.116228 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.116332 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory" (OuterVolumeSpecName: "inventory") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.117743 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.121294 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.123681 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1183bb9-7531-4cbc-b0b8-c3df2ba56953" (UID: "b1183bb9-7531-4cbc-b0b8-c3df2ba56953"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183587 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j92kq\" (UniqueName: \"kubernetes.io/projected/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-kube-api-access-j92kq\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183873 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183882 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183892 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183900 4688 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183909 4688 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183917 4688 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183951 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.183960 4688 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b1183bb9-7531-4cbc-b0b8-c3df2ba56953-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.417974 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" event={"ID":"b1183bb9-7531-4cbc-b0b8-c3df2ba56953","Type":"ContainerDied","Data":"339859ecdb1d8d94af4818e899fe5e682d41dec96b3e08cc4a2ad0309762a390"} Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.418013 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-j64r4" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.418018 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="339859ecdb1d8d94af4818e899fe5e682d41dec96b3e08cc4a2ad0309762a390" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.541983 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx"] Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542654 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542669 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542676 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542682 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542693 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542703 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542717 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542723 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542742 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542748 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542758 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542764 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542776 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542783 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542794 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542800 4688 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542812 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1183bb9-7531-4cbc-b0b8-c3df2ba56953" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542817 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1183bb9-7531-4cbc-b0b8-c3df2ba56953" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542827 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542832 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542858 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542863 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542871 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542876 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="extract-utilities" Jan 23 18:54:24 crc kubenswrapper[4688]: E0123 18:54:24.542885 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.542891 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="extract-content" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543110 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1183bb9-7531-4cbc-b0b8-c3df2ba56953" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543130 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4e7333-a94b-4307-800c-055845b5b6d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543142 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="456d4564-df2b-4ccf-807d-6a10b3cb05d9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543154 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb0ba81-fd03-4945-a6cc-8433a410c6d7" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543166 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acdce28-107f-4f08-a8c9-7e51e48f2bc9" containerName="registry-server" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.543928 4688 util.go:30] "No sandbox for pod can be found. 
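This burst of cpu_manager, state_mem, and memory_manager lines is housekeeping triggered by admitting the new telemetry pod: the resource managers sweep their checkpointed per-container state and drop entries for pods that no longer exist (the four catalog pods and the finished nova job). The E-level severity is cosmetic here; each removal is immediately confirmed by the paired "Deleted CPUSet assignment" line. An illustrative sweep in the same spirit (the map layout is hypothetical, not the kubelet's actual state types):

```go
package main

import "fmt"

// removeStaleState drops checkpointed assignments for pods the kubelet
// no longer tracks, mirroring the RemoveStaleState log lines above.
func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue // pod still exists; keep its state
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
		}
		delete(assignments, podUID) // deleting during range is safe in Go
	}
}

func main() {
	state := map[string]map[string]string{
		"6acdce28-107f-4f08-a8c9-7e51e48f2bc9": {"registry-server": "cpuset:0-3"},
		"b1183bb9-7531-4cbc-b0b8-c3df2ba56953": {"nova-edpm-deployment-openstack-edpm-ipam": "cpuset:0-3"},
	}
	removeStaleState(state, map[string]bool{}) // neither pod is active anymore
}
```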
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.547951 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.547950 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.548064 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.548410 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.548619 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b5qcj" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.552328 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx"] Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.695749 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.695806 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9v6\" (UniqueName: \"kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.695841 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.695873 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.695974 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.696013 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.696061 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798102 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798256 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798294 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9v6\" (UniqueName: \"kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798318 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798347 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798411 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.798451 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.802929 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.804431 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.806427 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.807019 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.807605 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.815853 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.821582 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9v6\" (UniqueName: 
\"kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:24 crc kubenswrapper[4688]: I0123 18:54:24.876455 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:54:26 crc kubenswrapper[4688]: I0123 18:54:26.352898 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx"] Jan 23 18:54:26 crc kubenswrapper[4688]: I0123 18:54:26.871210 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" event={"ID":"fc299185-3ca0-4d2b-b24c-ab75fc65d49a","Type":"ContainerStarted","Data":"db50de6d57f087a0fd0f77622b70972b639818012f2e034c9468164e6908a6f7"} Jan 23 18:54:27 crc kubenswrapper[4688]: I0123 18:54:27.883657 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" event={"ID":"fc299185-3ca0-4d2b-b24c-ab75fc65d49a","Type":"ContainerStarted","Data":"b4abe999538630d95ae755f3bed38569965d72324487ae105e546e2f78309e24"} Jan 23 18:54:27 crc kubenswrapper[4688]: I0123 18:54:27.905303 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" podStartSLOduration=3.185424545 podStartE2EDuration="3.905283412s" podCreationTimestamp="2026-01-23 18:54:24 +0000 UTC" firstStartedPulling="2026-01-23 18:54:26.335548966 +0000 UTC m=+2861.331373407" lastFinishedPulling="2026-01-23 18:54:27.055407843 +0000 UTC m=+2862.051232274" observedRunningTime="2026-01-23 18:54:27.900722161 +0000 UTC m=+2862.896546602" watchObservedRunningTime="2026-01-23 18:54:27.905283412 +0000 UTC m=+2862.901107853" Jan 23 18:54:36 crc kubenswrapper[4688]: I0123 18:54:36.965453 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:54:36 crc kubenswrapper[4688]: I0123 18:54:36.966078 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:55:06 crc kubenswrapper[4688]: I0123 18:55:06.965536 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 18:55:06 crc kubenswrapper[4688]: I0123 18:55:06.966103 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.965271 4688 
Jan 23 18:54:36 crc kubenswrapper[4688]: I0123 18:54:36.965453 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:54:36 crc kubenswrapper[4688]: I0123 18:54:36.966078 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:55:06 crc kubenswrapper[4688]: I0123 18:55:06.965536 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:55:06 crc kubenswrapper[4688]: I0123 18:55:06.966103 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.965271 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.966296 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.966517 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2"
Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.967507 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:55:36 crc kubenswrapper[4688]: I0123 18:55:36.967558 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c" gracePeriod=600
Jan 23 18:55:37 crc kubenswrapper[4688]: I0123 18:55:37.594399 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c" exitCode=0
Jan 23 18:55:37 crc kubenswrapper[4688]: I0123 18:55:37.594476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c"}
Jan 23 18:55:37 crc kubenswrapper[4688]: I0123 18:55:37.594848 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"}
Jan 23 18:55:37 crc kubenswrapper[4688]: I0123 18:55:37.594885 4688 scope.go:117] "RemoveContainer" containerID="e8e70e033fff563ad0b8eced33d85c51de197a13560d34ba6d2abae53b9732f9"
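
The entries above show the liveness-probe kill path end to end: failures recur on a 30-second cadence (18:54:36, 18:55:06, 18:55:36), after the third failure shown here the kubelet marks the container unhealthy, kills it with the pod's 600s grace period, and the PLEG reports ContainerDied followed by ContainerStarted as the replacement comes up (exitCode=0, so the daemon shut down cleanly). A sketch of a probe spec consistent with these entries; only the path, port, and restart behaviour are visible in the log, so periodSeconds and failureThreshold here are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // GET http://127.0.0.1:8798/health, as in the probe output above.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // assumed from the 30s failure cadence
            FailureThreshold: 3,  // assumed: three failures precede the kill
        }
        fmt.Printf("%+v\n", probe)
    }
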
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.011693 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9v6\" (UniqueName: \"kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.011781 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.011823 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.011850 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.012004 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.012067 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.012159 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1\") pod \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\" (UID: \"fc299185-3ca0-4d2b-b24c-ab75fc65d49a\") " Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.032455 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6" (OuterVolumeSpecName: "kube-api-access-zc9v6") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "kube-api-access-zc9v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.038432 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.045658 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.046624 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.047751 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.060992 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory" (OuterVolumeSpecName: "inventory") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.067409 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fc299185-3ca0-4d2b-b24c-ab75fc65d49a" (UID: "fc299185-3ca0-4d2b-b24c-ab75fc65d49a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.115598 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.115876 4688 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.115977 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.116162 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9v6\" (UniqueName: \"kubernetes.io/projected/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-kube-api-access-zc9v6\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.116312 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.116391 4688 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.116470 4688 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fc299185-3ca0-4d2b-b24c-ab75fc65d49a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.375104 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" event={"ID":"fc299185-3ca0-4d2b-b24c-ab75fc65d49a","Type":"ContainerDied","Data":"db50de6d57f087a0fd0f77622b70972b639818012f2e034c9468164e6908a6f7"} Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.375167 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db50de6d57f087a0fd0f77622b70972b639818012f2e034c9468164e6908a6f7" Jan 23 18:56:52 crc kubenswrapper[4688]: I0123 18:56:52.375178 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx" Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.349700 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.350707 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="prometheus" containerID="cri-o://b606d533ad41879be922e9db33b200589a61c425d40ec7008b8e132b3dd84b07" gracePeriod=600 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.350912 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="thanos-sidecar" containerID="cri-o://bc398ab3de719e1581112306e319393e864bd7705c14c74761f7db65f3ec03c4" gracePeriod=600 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.350969 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="config-reloader" containerID="cri-o://3aff1ace720d5b537d98cf9c6d20923f424d2d22f33058b8f3ea933ecd9eb3b0" gracePeriod=600 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.762787 4688 generic.go:334] "Generic (PLEG): container finished" podID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerID="bc398ab3de719e1581112306e319393e864bd7705c14c74761f7db65f3ec03c4" exitCode=0 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.763114 4688 generic.go:334] "Generic (PLEG): container finished" podID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerID="3aff1ace720d5b537d98cf9c6d20923f424d2d22f33058b8f3ea933ecd9eb3b0" exitCode=0 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.763124 4688 generic.go:334] "Generic (PLEG): container finished" podID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerID="b606d533ad41879be922e9db33b200589a61c425d40ec7008b8e132b3dd84b07" exitCode=0 Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.762881 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerDied","Data":"bc398ab3de719e1581112306e319393e864bd7705c14c74761f7db65f3ec03c4"} Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.763164 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerDied","Data":"3aff1ace720d5b537d98cf9c6d20923f424d2d22f33058b8f3ea933ecd9eb3b0"} Jan 23 18:57:32 crc kubenswrapper[4688]: I0123 18:57:32.763178 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerDied","Data":"b606d533ad41879be922e9db33b200589a61c425d40ec7008b8e132b3dd84b07"} Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.326404 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.488966 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489040 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5g82\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489239 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489287 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489338 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489380 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489410 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: 
\"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489442 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489476 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489540 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489570 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.489673 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets\") pod \"138a44f4-e939-4138-8f9d-aae45c6aef1f\" (UID: \"138a44f4-e939-4138-8f9d-aae45c6aef1f\") " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.490205 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.490696 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.491357 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.491381 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.491394 4688 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/138a44f4-e939-4138-8f9d-aae45c6aef1f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.497282 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.497329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config" (OuterVolumeSpecName: "config") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.497617 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.497919 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.498863 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82" (OuterVolumeSpecName: "kube-api-access-l5g82") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "kube-api-access-l5g82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.498884 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out" (OuterVolumeSpecName: "config-out") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.499623 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.502886 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.516833 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.583511 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config" (OuterVolumeSpecName: "web-config") pod "138a44f4-e939-4138-8f9d-aae45c6aef1f" (UID: "138a44f4-e939-4138-8f9d-aae45c6aef1f"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594209 4688 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/138a44f4-e939-4138-8f9d-aae45c6aef1f-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594250 4688 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594266 4688 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594282 4688 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-config\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594295 4688 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594413 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") on node \"crc\" " Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594432 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5g82\" (UniqueName: \"kubernetes.io/projected/138a44f4-e939-4138-8f9d-aae45c6aef1f-kube-api-access-l5g82\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594451 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594468 4688 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.594483 4688 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/138a44f4-e939-4138-8f9d-aae45c6aef1f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.632706 4688 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.633063 4688 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075") on node "crc" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.696490 4688 reconciler_common.go:293] "Volume detached for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") on node \"crc\" DevicePath \"\"" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.775036 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"138a44f4-e939-4138-8f9d-aae45c6aef1f","Type":"ContainerDied","Data":"ca76d0b238b164bee22636e006578ef8df671f8532d749e718e9ce5f7f41decc"} Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.775137 4688 scope.go:117] "RemoveContainer" containerID="bc398ab3de719e1581112306e319393e864bd7705c14c74761f7db65f3ec03c4" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.775143 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.795885 4688 scope.go:117] "RemoveContainer" containerID="3aff1ace720d5b537d98cf9c6d20923f424d2d22f33058b8f3ea933ecd9eb3b0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.817844 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.833248 4688 scope.go:117] "RemoveContainer" containerID="b606d533ad41879be922e9db33b200589a61c425d40ec7008b8e132b3dd84b07" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.893446 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.896631 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:57:33 crc kubenswrapper[4688]: E0123 18:57:33.897275 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="prometheus" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897298 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="prometheus" Jan 23 18:57:33 crc kubenswrapper[4688]: E0123 18:57:33.897339 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="config-reloader" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897349 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="config-reloader" Jan 23 18:57:33 crc kubenswrapper[4688]: E0123 18:57:33.897366 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="thanos-sidecar" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897374 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="thanos-sidecar" Jan 23 18:57:33 crc kubenswrapper[4688]: E0123 18:57:33.897390 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc299185-3ca0-4d2b-b24c-ab75fc65d49a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 
18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897399 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc299185-3ca0-4d2b-b24c-ab75fc65d49a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 18:57:33 crc kubenswrapper[4688]: E0123 18:57:33.897415 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="init-config-reloader" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897423 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="init-config-reloader" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897593 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="thanos-sidecar" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897612 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc299185-3ca0-4d2b-b24c-ab75fc65d49a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897628 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="config-reloader" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.897640 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" containerName="prometheus" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.899685 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.904521 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.904982 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.905142 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.905407 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7vbgs" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.905619 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.905966 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.906079 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.912043 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.912526 4688 scope.go:117] "RemoveContainer" containerID="8535910b0624778667f6ed21e1126b11bf194bcc76875fe4a5c9cfeab8771ea0" Jan 23 18:57:33 crc kubenswrapper[4688]: I0123 18:57:33.924078 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.003814 4688 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.003911 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.003964 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004012 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004039 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004107 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004173 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4a7e167-5a90-4925-8004-520317d7826f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004236 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004264 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmmxm\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-kube-api-access-mmmxm\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004302 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004335 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.004394 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106493 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106556 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106632 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106669 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106692 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4a7e167-5a90-4925-8004-520317d7826f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106729 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106750 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmmxm\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-kube-api-access-mmmxm\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106788 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106813 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106874 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106923 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.106983 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.107034 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.107986 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.109122 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.110026 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d4a7e167-5a90-4925-8004-520317d7826f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.123118 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.124780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d4a7e167-5a90-4925-8004-520317d7826f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.130057 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.130597 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.133749 4688 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.133793 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b863878884b5da2d8536161babd136087c9985963bc488b510e2c38ec292fd7e/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.134039 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.138168 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmmxm\" (UniqueName: \"kubernetes.io/projected/d4a7e167-5a90-4925-8004-520317d7826f-kube-api-access-mmmxm\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.140469 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.157748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.161488 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a7e167-5a90-4925-8004-520317d7826f-config\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.218054 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075\") pod \"prometheus-metric-storage-0\" (UID: \"d4a7e167-5a90-4925-8004-520317d7826f\") " pod="openstack/prometheus-metric-storage-0"
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.234579 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
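Note: the run above is the kubelet volume path for prometheus-metric-storage-0: operationExecutor.VerifyControllerAttachedVolume for every volume, then MountVolume, with the CSI attacher skipping the MountDevice (NodeStageVolume) step because kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability; MountVolume.MountDevice is then recorded as trivially succeeded and SetUp (NodePublishVolume) proceeds. A minimal sketch of that decision, using simplified stand-in types rather than the real kubelet/CSI interfaces:

    // capability_sketch.go - illustrates the "Skipping MountDevice" branch.
    package main

    import "fmt"

    const stageUnstageVolume = "STAGE_UNSTAGE_VOLUME"

    // driver stands in for what NodeGetCapabilities reports for a CSI driver.
    type driver struct {
        name string
        caps map[string]bool
    }

    // mountDevice mirrors the logged flow: NodeStageVolume only runs for
    // drivers that advertise STAGE_UNSTAGE_VOLUME; otherwise the step is
    // skipped and still reported as succeeded.
    func mountDevice(d driver, volumeID, globalMountPath string) {
        if !d.caps[stageUnstageVolume] {
            fmt.Printf("%s: STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...\n", d.name)
            return
        }
        fmt.Printf("%s: NodeStageVolume(%s) -> %s\n", d.name, volumeID, globalMountPath)
    }

    func main() {
        hostpath := driver{name: "kubevirt.io.hostpath-provisioner", caps: map[string]bool{}}
        mountDevice(hostpath, "pvc-ec16bb05-ad39-4f2f-9dbc-1ca83a5e5075", "<globalmount path>")
    }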
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.714477 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 23 18:57:34 crc kubenswrapper[4688]: W0123 18:57:34.720214 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4a7e167_5a90_4925_8004_520317d7826f.slice/crio-4d5bca82f3de83657317a36a3a6629bcfb29908cb62576ef03b85bbec4f43faa WatchSource:0}: Error finding container 4d5bca82f3de83657317a36a3a6629bcfb29908cb62576ef03b85bbec4f43faa: Status 404 returned error can't find the container with id 4d5bca82f3de83657317a36a3a6629bcfb29908cb62576ef03b85bbec4f43faa
Jan 23 18:57:34 crc kubenswrapper[4688]: I0123 18:57:34.788600 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerStarted","Data":"4d5bca82f3de83657317a36a3a6629bcfb29908cb62576ef03b85bbec4f43faa"}
Jan 23 18:57:35 crc kubenswrapper[4688]: I0123 18:57:35.367543 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138a44f4-e939-4138-8f9d-aae45c6aef1f" path="/var/lib/kubelet/pods/138a44f4-e939-4138-8f9d-aae45c6aef1f/volumes"
Jan 23 18:57:38 crc kubenswrapper[4688]: I0123 18:57:38.823559 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerStarted","Data":"3af288d07ff7f3228f6f0329ad68fa755e2c1074c059df11f5fddddb9e817c0f"}
Jan 23 18:57:46 crc kubenswrapper[4688]: I0123 18:57:46.905528 4688 generic.go:334] "Generic (PLEG): container finished" podID="d4a7e167-5a90-4925-8004-520317d7826f" containerID="3af288d07ff7f3228f6f0329ad68fa755e2c1074c059df11f5fddddb9e817c0f" exitCode=0
Jan 23 18:57:46 crc kubenswrapper[4688]: I0123 18:57:46.905657 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerDied","Data":"3af288d07ff7f3228f6f0329ad68fa755e2c1074c059df11f5fddddb9e817c0f"}
Jan 23 18:57:47 crc kubenswrapper[4688]: I0123 18:57:47.916083 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerStarted","Data":"64323f85550145705d956ab6447c3691103c2720c7a642680b8ecd2b45968a98"}
Jan 23 18:57:51 crc kubenswrapper[4688]: I0123 18:57:51.958066 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerStarted","Data":"9c4fdc85f846a11ba6fced832516c559043cb0e9a2ad40cf7d1f24606ffe338a"}
Jan 23 18:57:51 crc kubenswrapper[4688]: I0123 18:57:51.958708 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d4a7e167-5a90-4925-8004-520317d7826f","Type":"ContainerStarted","Data":"3f54752a4009706bb0ca90f94ce5930d1eccb559c57c7df0d07109574ef10ff4"}
Jan 23 18:57:51 crc kubenswrapper[4688]: I0123 18:57:51.991021 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.990979269 podStartE2EDuration="18.990979269s" podCreationTimestamp="2026-01-23 18:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:51.989936799 +0000 UTC m=+3066.985761250" watchObservedRunningTime="2026-01-23 18:57:51.990979269 +0000 UTC m=+3066.986803720"
Jan 23 18:57:54 crc kubenswrapper[4688]: I0123 18:57:54.234761 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 23 18:58:04 crc kubenswrapper[4688]: I0123 18:58:04.235888 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 23 18:58:04 crc kubenswrapper[4688]: I0123 18:58:04.242891 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 23 18:58:05 crc kubenswrapper[4688]: I0123 18:58:05.105468 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 23 18:58:06 crc kubenswrapper[4688]: I0123 18:58:06.967623 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:58:06 crc kubenswrapper[4688]: I0123 18:58:06.967966 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.415769 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.417715 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.420545 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-twb2t"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.420565 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.420877 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.421325 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.429792 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567569 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58g5h\" (UniqueName: \"kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567664 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567728 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567887 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.567954 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.568024 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.568060 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.568240 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.670727 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.670822 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.670860 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.670930 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.670971 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.671022 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.671053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.671124 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.671238 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58g5h\" (UniqueName: \"kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.671832 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.672090 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.672097 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.673641 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.677681 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.677924 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.685912 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.691520 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:23 crc kubenswrapper[4688]: I0123 18:58:23.707000 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58g5h\" (UniqueName: \"kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:24 crc kubenswrapper[4688]: I0123 18:58:24.169964 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " pod="openstack/tempest-tests-tempest"
Jan 23 18:58:24 crc kubenswrapper[4688]: I0123 18:58:24.175081 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 23 18:58:24 crc kubenswrapper[4688]: I0123 18:58:24.654079 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 18:58:24 crc kubenswrapper[4688]: I0123 18:58:24.658774 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 18:58:25 crc kubenswrapper[4688]: I0123 18:58:25.326450 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18226ae9-4f88-4376-a16d-b59b78912de7","Type":"ContainerStarted","Data":"39155dfe65e97ef160356b779bb2f7fbb3d32e52eef7046e5b691e4a0eaeecdb"}
Jan 23 18:58:36 crc kubenswrapper[4688]: I0123 18:58:36.965148 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:58:36 crc kubenswrapper[4688]: I0123 18:58:36.966155 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:58:38 crc kubenswrapper[4688]: I0123 18:58:38.488800 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18226ae9-4f88-4376-a16d-b59b78912de7","Type":"ContainerStarted","Data":"b1d7f69f0f60e3abb32de44f107233e88cb609f88b141a5eaf012e37d3a5a9a0"}
Jan 23 18:58:38 crc kubenswrapper[4688]: I0123 18:58:38.512812 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.996678412 podStartE2EDuration="16.512781197s" podCreationTimestamp="2026-01-23 18:58:22 +0000 UTC" firstStartedPulling="2026-01-23 18:58:24.653797511 +0000 UTC m=+3099.649621952" lastFinishedPulling="2026-01-23 18:58:37.169900296 +0000 UTC m=+3112.165724737" observedRunningTime="2026-01-23 18:58:38.508991578 +0000 UTC m=+3113.504816049" watchObservedRunningTime="2026-01-23 18:58:38.512781197 +0000 UTC m=+3113.508605648"
Jan 23 18:59:06 crc kubenswrapper[4688]: I0123 18:59:06.964754 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 18:59:06 crc kubenswrapper[4688]: I0123 18:59:06.965363 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 18:59:06 crc kubenswrapper[4688]: I0123 18:59:06.965416 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2"
Jan 23 18:59:06 crc kubenswrapper[4688]: I0123 18:59:06.966329 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 18:59:06 crc kubenswrapper[4688]: I0123 18:59:06.966383 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" gracePeriod=600
Jan 23 18:59:11 crc kubenswrapper[4688]: E0123 18:59:11.059609 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:59:11 crc kubenswrapper[4688]: I0123 18:59:11.821530 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" exitCode=0
Jan 23 18:59:11 crc kubenswrapper[4688]: I0123 18:59:11.821578 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"}
Jan 23 18:59:11 crc kubenswrapper[4688]: I0123 18:59:11.821617 4688 scope.go:117] "RemoveContainer" containerID="4dace1e8bac725685c5c838520904ea097385b8c457cc5377920408be842f34c"
Jan 23 18:59:11 crc kubenswrapper[4688]: I0123 18:59:11.822526 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 18:59:11 crc kubenswrapper[4688]: E0123 18:59:11.822824 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:59:22 crc kubenswrapper[4688]: I0123 18:59:22.358061 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 18:59:22 crc kubenswrapper[4688]: E0123 18:59:22.358979 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:59:37 crc kubenswrapper[4688]: I0123 18:59:37.356832 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 18:59:37 crc kubenswrapper[4688]: E0123 18:59:37.357643 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:59:48 crc kubenswrapper[4688]: I0123 18:59:48.356689 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 18:59:48 crc kubenswrapper[4688]: E0123 18:59:48.357668 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 18:59:57 crc kubenswrapper[4688]: I0123 18:59:57.863696 4688 scope.go:117] "RemoveContainer" containerID="7680b2d228e62dd6066982b462bf6f189a83f7a8d8105b59599ab6cf600db36a"
Jan 23 18:59:57 crc kubenswrapper[4688]: I0123 18:59:57.890451 4688 scope.go:117] "RemoveContainer" containerID="a50ea00cbbc997a0978dd734d9e419039b26d632daa5cd72bdf81d600fab8bd6"
Jan 23 18:59:57 crc kubenswrapper[4688]: I0123 18:59:57.923510 4688 scope.go:117] "RemoveContainer" containerID="fdf175a9a685ba711ee9270269cf952662ed0f478441ff1caa6d075fe8ace81b"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.153101 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"]
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.154819 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
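Note: "back-off 5m0s" in the pod_workers errors above means this container's restart backoff has already hit the kubelet's cap; upstream kubelet doubles the delay per failed restart from an initial 10s up to a 5m maximum (stated here as an assumption about this build's defaults), so every sync attempt inside the open window is rejected with the CrashLoopBackOff error instead of restarting the container. Sketch of the doubling:

    // backoff_sketch.go - exponential restart backoff with a 5m cap.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 10 * time.Second // assumed initial container backoff
        const max = 5 * time.Minute // assumed cap, matching "back-off 5m0s"
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v before next attempt\n", restart, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max // from here on, every sync logs "back-off 5m0s"
            }
        }
    }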
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.163467 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.163607 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.178064 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8xs\" (UniqueName: \"kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.178127 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.178287 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.188158 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"]
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.279973 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8xs\" (UniqueName: \"kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.280022 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.280114 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.281006 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.289923 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.299753 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8xs\" (UniqueName: \"kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs\") pod \"collect-profiles-29486580-c6ksf\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.477345 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:00 crc kubenswrapper[4688]: I0123 19:00:00.942628 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"]
Jan 23 19:00:01 crc kubenswrapper[4688]: I0123 19:00:01.299984 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf" event={"ID":"ace5702f-36da-49f7-8a3e-536784bf7b2a","Type":"ContainerStarted","Data":"d1e55df49a8662a17de4793a57d75ed262469a99d6d79a1bc408a56f8b5742f4"}
Jan 23 19:00:01 crc kubenswrapper[4688]: I0123 19:00:01.300372 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf" event={"ID":"ace5702f-36da-49f7-8a3e-536784bf7b2a","Type":"ContainerStarted","Data":"18f63f7609fce4aa05ac184d85d4635bc21721e5a623e96b62b19eba2b147182"}
Jan 23 19:00:01 crc kubenswrapper[4688]: I0123 19:00:01.327246 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf" podStartSLOduration=1.327220453 podStartE2EDuration="1.327220453s" podCreationTimestamp="2026-01-23 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:00:01.314532321 +0000 UTC m=+3196.310356762" watchObservedRunningTime="2026-01-23 19:00:01.327220453 +0000 UTC m=+3196.323044894"
Jan 23 19:00:02 crc kubenswrapper[4688]: I0123 19:00:02.310428 4688 generic.go:334] "Generic (PLEG): container finished" podID="ace5702f-36da-49f7-8a3e-536784bf7b2a" containerID="d1e55df49a8662a17de4793a57d75ed262469a99d6d79a1bc408a56f8b5742f4" exitCode=0
Jan 23 19:00:02 crc kubenswrapper[4688]: I0123 19:00:02.311158 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf" event={"ID":"ace5702f-36da-49f7-8a3e-536784bf7b2a","Type":"ContainerDied","Data":"d1e55df49a8662a17de4793a57d75ed262469a99d6d79a1bc408a56f8b5742f4"}
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.357791 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 19:00:03 crc kubenswrapper[4688]: E0123 19:00:03.358136 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.696971 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.855431 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume\") pod \"ace5702f-36da-49f7-8a3e-536784bf7b2a\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") "
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.855745 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d8xs\" (UniqueName: \"kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs\") pod \"ace5702f-36da-49f7-8a3e-536784bf7b2a\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") "
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.855847 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume\") pod \"ace5702f-36da-49f7-8a3e-536784bf7b2a\" (UID: \"ace5702f-36da-49f7-8a3e-536784bf7b2a\") "
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.856536 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ace5702f-36da-49f7-8a3e-536784bf7b2a" (UID: "ace5702f-36da-49f7-8a3e-536784bf7b2a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.861624 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs" (OuterVolumeSpecName: "kube-api-access-5d8xs") pod "ace5702f-36da-49f7-8a3e-536784bf7b2a" (UID: "ace5702f-36da-49f7-8a3e-536784bf7b2a"). InnerVolumeSpecName "kube-api-access-5d8xs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.862492 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ace5702f-36da-49f7-8a3e-536784bf7b2a" (UID: "ace5702f-36da-49f7-8a3e-536784bf7b2a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.959057 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ace5702f-36da-49f7-8a3e-536784bf7b2a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.959116 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d8xs\" (UniqueName: \"kubernetes.io/projected/ace5702f-36da-49f7-8a3e-536784bf7b2a-kube-api-access-5d8xs\") on node \"crc\" DevicePath \"\""
Jan 23 19:00:03 crc kubenswrapper[4688]: I0123 19:00:03.959134 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ace5702f-36da-49f7-8a3e-536784bf7b2a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 19:00:04 crc kubenswrapper[4688]: I0123 19:00:04.328457 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf" event={"ID":"ace5702f-36da-49f7-8a3e-536784bf7b2a","Type":"ContainerDied","Data":"18f63f7609fce4aa05ac184d85d4635bc21721e5a623e96b62b19eba2b147182"}
Jan 23 19:00:04 crc kubenswrapper[4688]: I0123 19:00:04.328768 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f63f7609fce4aa05ac184d85d4635bc21721e5a623e96b62b19eba2b147182"
Jan 23 19:00:04 crc kubenswrapper[4688]: I0123 19:00:04.329232 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"
Jan 23 19:00:04 crc kubenswrapper[4688]: I0123 19:00:04.433506 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj"]
Jan 23 19:00:04 crc kubenswrapper[4688]: I0123 19:00:04.444687 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486535-4xpnj"]
Jan 23 19:00:05 crc kubenswrapper[4688]: I0123 19:00:05.371848 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7040e9ba-84d7-420e-81ac-f1aac91d5a47" path="/var/lib/kubelet/pods/7040e9ba-84d7-420e-81ac-f1aac91d5a47/volumes"
Jan 23 19:00:18 crc kubenswrapper[4688]: I0123 19:00:18.357178 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 19:00:18 crc kubenswrapper[4688]: E0123 19:00:18.358710 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:00:32 crc kubenswrapper[4688]: I0123 19:00:32.357798 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 19:00:32 crc kubenswrapper[4688]: E0123 19:00:32.359128 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:00:47 crc kubenswrapper[4688]: I0123 19:00:47.356877 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:00:47 crc kubenswrapper[4688]: E0123 19:00:47.357623 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:00:58 crc kubenswrapper[4688]: I0123 19:00:58.008555 4688 scope.go:117] "RemoveContainer" containerID="33c65188357ecddc115db8df4c1ad64ee0205ff703068f2e9047ac200fd3b57e" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.148907 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486581-smm8p"] Jan 23 19:01:00 crc kubenswrapper[4688]: E0123 19:01:00.149724 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace5702f-36da-49f7-8a3e-536784bf7b2a" containerName="collect-profiles" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.149737 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace5702f-36da-49f7-8a3e-536784bf7b2a" containerName="collect-profiles" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.149959 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace5702f-36da-49f7-8a3e-536784bf7b2a" containerName="collect-profiles" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.150797 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.160417 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486581-smm8p"] Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.258009 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.258114 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndzb\" (UniqueName: \"kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.258146 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.258407 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.359962 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.360142 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.360225 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ndzb\" (UniqueName: \"kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.360247 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.366894 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.368605 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.374944 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.378172 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ndzb\" (UniqueName: \"kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb\") pod \"keystone-cron-29486581-smm8p\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.468466 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:00 crc kubenswrapper[4688]: I0123 19:01:00.912028 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486581-smm8p"] Jan 23 19:01:00 crc kubenswrapper[4688]: W0123 19:01:00.930335 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc94d940e_9cfe_4bd3_bc70_fab5a68e0f20.slice/crio-902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d WatchSource:0}: Error finding container 902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d: Status 404 returned error can't find the container with id 902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d Jan 23 19:01:01 crc kubenswrapper[4688]: I0123 19:01:01.844943 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-smm8p" event={"ID":"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20","Type":"ContainerStarted","Data":"4d79ed7dbeac47b143a91bc44f9fa0380fb8bff91e34cbbc6f0bb637d4fde1cc"} Jan 23 19:01:01 crc kubenswrapper[4688]: I0123 19:01:01.845319 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-smm8p" event={"ID":"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20","Type":"ContainerStarted","Data":"902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d"} Jan 23 19:01:01 crc kubenswrapper[4688]: I0123 19:01:01.874216 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486581-smm8p" podStartSLOduration=1.874170366 podStartE2EDuration="1.874170366s" podCreationTimestamp="2026-01-23 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:01:01.8729115 +0000 UTC m=+3256.868735941" watchObservedRunningTime="2026-01-23 19:01:01.874170366 +0000 UTC m=+3256.869994817" Jan 23 19:01:02 crc kubenswrapper[4688]: I0123 19:01:02.357269 4688 
scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:01:02 crc kubenswrapper[4688]: E0123 19:01:02.357601 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:01:03 crc kubenswrapper[4688]: I0123 19:01:03.881078 4688 generic.go:334] "Generic (PLEG): container finished" podID="c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" containerID="4d79ed7dbeac47b143a91bc44f9fa0380fb8bff91e34cbbc6f0bb637d4fde1cc" exitCode=0 Jan 23 19:01:03 crc kubenswrapper[4688]: I0123 19:01:03.881228 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-smm8p" event={"ID":"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20","Type":"ContainerDied","Data":"4d79ed7dbeac47b143a91bc44f9fa0380fb8bff91e34cbbc6f0bb637d4fde1cc"} Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.310144 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.373059 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle\") pod \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.373134 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ndzb\" (UniqueName: \"kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb\") pod \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.373318 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys\") pod \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.374281 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data\") pod \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\" (UID: \"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20\") " Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.380868 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb" (OuterVolumeSpecName: "kube-api-access-4ndzb") pod "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" (UID: "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20"). InnerVolumeSpecName "kube-api-access-4ndzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.382376 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" (UID: "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.405093 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" (UID: "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.428772 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data" (OuterVolumeSpecName: "config-data") pod "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" (UID: "c94d940e-9cfe-4bd3-bc70-fab5a68e0f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.477138 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.477195 4688 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.477206 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ndzb\" (UniqueName: \"kubernetes.io/projected/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-kube-api-access-4ndzb\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.477215 4688 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94d940e-9cfe-4bd3-bc70-fab5a68e0f20-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.901993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486581-smm8p" event={"ID":"c94d940e-9cfe-4bd3-bc70-fab5a68e0f20","Type":"ContainerDied","Data":"902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d"} Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.902628 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902ed44ad8ada8610b6ef61401401ea6e7fa065f42dd3635bfa6e60fcf48b39d" Jan 23 19:01:05 crc kubenswrapper[4688]: I0123 19:01:05.902046 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486581-smm8p" Jan 23 19:01:15 crc kubenswrapper[4688]: I0123 19:01:15.363288 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:01:15 crc kubenswrapper[4688]: E0123 19:01:15.363980 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:01:26 crc kubenswrapper[4688]: I0123 19:01:26.356413 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:01:26 crc kubenswrapper[4688]: E0123 19:01:26.357300 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:01:40 crc kubenswrapper[4688]: I0123 19:01:40.356436 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:01:40 crc kubenswrapper[4688]: E0123 19:01:40.357320 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:01:55 crc kubenswrapper[4688]: I0123 19:01:55.362441 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:01:55 crc kubenswrapper[4688]: E0123 19:01:55.363272 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:02:06 crc kubenswrapper[4688]: I0123 19:02:06.375808 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:02:06 crc kubenswrapper[4688]: E0123 19:02:06.376623 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:02:18 crc kubenswrapper[4688]: I0123 19:02:18.356824 4688 scope.go:117] "RemoveContainer" 
containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:02:18 crc kubenswrapper[4688]: E0123 19:02:18.357920 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:02:29 crc kubenswrapper[4688]: I0123 19:02:29.356742 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:02:29 crc kubenswrapper[4688]: E0123 19:02:29.357861 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:02:41 crc kubenswrapper[4688]: I0123 19:02:41.357007 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:02:41 crc kubenswrapper[4688]: E0123 19:02:41.357811 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:02:56 crc kubenswrapper[4688]: I0123 19:02:56.358838 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:02:56 crc kubenswrapper[4688]: E0123 19:02:56.360383 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:03:10 crc kubenswrapper[4688]: I0123 19:03:10.357353 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:03:10 crc kubenswrapper[4688]: E0123 19:03:10.358228 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:03:21 crc kubenswrapper[4688]: I0123 19:03:21.356725 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:03:21 crc kubenswrapper[4688]: E0123 19:03:21.357436 4688 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:03:34 crc kubenswrapper[4688]: I0123 19:03:34.357166 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:03:34 crc kubenswrapper[4688]: E0123 19:03:34.357901 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:03:48 crc kubenswrapper[4688]: I0123 19:03:48.356720 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:03:48 crc kubenswrapper[4688]: E0123 19:03:48.357476 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.029454 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:03:53 crc kubenswrapper[4688]: E0123 19:03:53.030631 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" containerName="keystone-cron" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.030648 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" containerName="keystone-cron" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.030890 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94d940e-9cfe-4bd3-bc70-fab5a68e0f20" containerName="keystone-cron" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.032844 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.039575 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.105542 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfvj7\" (UniqueName: \"kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.105606 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.105866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.208255 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.208466 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfvj7\" (UniqueName: \"kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.208509 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.209068 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.209345 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.228990 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qfvj7\" (UniqueName: \"kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7\") pod \"redhat-operators-v82zs\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.361894 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:03:53 crc kubenswrapper[4688]: I0123 19:03:53.918442 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:03:54 crc kubenswrapper[4688]: I0123 19:03:54.566241 4688 generic.go:334] "Generic (PLEG): container finished" podID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerID="82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7" exitCode=0 Jan 23 19:03:54 crc kubenswrapper[4688]: I0123 19:03:54.566312 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerDied","Data":"82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7"} Jan 23 19:03:54 crc kubenswrapper[4688]: I0123 19:03:54.566698 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerStarted","Data":"32bd3f6a5630b52622a542aeabb1a5ba3b0ce755571f739579c29d9b2de40d07"} Jan 23 19:03:54 crc kubenswrapper[4688]: I0123 19:03:54.568301 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:03:55 crc kubenswrapper[4688]: I0123 19:03:55.577058 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerStarted","Data":"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb"} Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.020811 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-56vt6"] Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.023344 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.048519 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56vt6"] Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.095065 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvd5j\" (UniqueName: \"kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.095126 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.095158 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.196987 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvd5j\" (UniqueName: \"kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.197080 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.197126 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.197733 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.197799 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.220034 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xvd5j\" (UniqueName: \"kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j\") pod \"certified-operators-56vt6\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.352950 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:03:57 crc kubenswrapper[4688]: I0123 19:03:57.953296 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56vt6"] Jan 23 19:03:58 crc kubenswrapper[4688]: I0123 19:03:58.633942 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerStarted","Data":"2b2ed50471e350437e697513f6dc935ec1f3cf38379fc77d96c10492aaabec2a"} Jan 23 19:03:59 crc kubenswrapper[4688]: I0123 19:03:59.654253 4688 generic.go:334] "Generic (PLEG): container finished" podID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerID="0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb" exitCode=0 Jan 23 19:03:59 crc kubenswrapper[4688]: I0123 19:03:59.654303 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerDied","Data":"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb"} Jan 23 19:03:59 crc kubenswrapper[4688]: I0123 19:03:59.660343 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerID="41d6faa51046b480af41a00b47453dc4746a38e0f937c40bbbd6eb4c8376b73b" exitCode=0 Jan 23 19:03:59 crc kubenswrapper[4688]: I0123 19:03:59.660387 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerDied","Data":"41d6faa51046b480af41a00b47453dc4746a38e0f937c40bbbd6eb4c8376b73b"} Jan 23 19:04:02 crc kubenswrapper[4688]: I0123 19:04:02.691520 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerStarted","Data":"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529"} Jan 23 19:04:02 crc kubenswrapper[4688]: I0123 19:04:02.693876 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerStarted","Data":"bbb5d21f8c0e51a94d527c63fd2158693888787320a37bdd061e164d25229543"} Jan 23 19:04:02 crc kubenswrapper[4688]: I0123 19:04:02.720144 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v82zs" podStartSLOduration=2.814863796 podStartE2EDuration="9.720120426s" podCreationTimestamp="2026-01-23 19:03:53 +0000 UTC" firstStartedPulling="2026-01-23 19:03:54.567986058 +0000 UTC m=+3429.563810499" lastFinishedPulling="2026-01-23 19:04:01.473242688 +0000 UTC m=+3436.469067129" observedRunningTime="2026-01-23 19:04:02.712739834 +0000 UTC m=+3437.708564275" watchObservedRunningTime="2026-01-23 19:04:02.720120426 +0000 UTC m=+3437.715944877" Jan 23 19:04:03 crc kubenswrapper[4688]: I0123 19:04:03.688346 4688 scope.go:117] "RemoveContainer" 
containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:04:03 crc kubenswrapper[4688]: E0123 19:04:03.689091 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:04:03 crc kubenswrapper[4688]: I0123 19:04:03.706412 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:03 crc kubenswrapper[4688]: I0123 19:04:03.706474 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:03 crc kubenswrapper[4688]: I0123 19:04:03.713027 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerID="bbb5d21f8c0e51a94d527c63fd2158693888787320a37bdd061e164d25229543" exitCode=0 Jan 23 19:04:03 crc kubenswrapper[4688]: I0123 19:04:03.714255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerDied","Data":"bbb5d21f8c0e51a94d527c63fd2158693888787320a37bdd061e164d25229543"} Jan 23 19:04:04 crc kubenswrapper[4688]: I0123 19:04:04.724114 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerStarted","Data":"3e359d2360206cec2c30bddbc314bd75df8e774d1d253a09783e7962db9d75c1"} Jan 23 19:04:04 crc kubenswrapper[4688]: I0123 19:04:04.748097 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v82zs" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" probeResult="failure" output=< Jan 23 19:04:04 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:04:04 crc kubenswrapper[4688]: > Jan 23 19:04:04 crc kubenswrapper[4688]: I0123 19:04:04.748945 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-56vt6" podStartSLOduration=4.285012555 podStartE2EDuration="8.748871675s" podCreationTimestamp="2026-01-23 19:03:56 +0000 UTC" firstStartedPulling="2026-01-23 19:03:59.662204819 +0000 UTC m=+3434.658029260" lastFinishedPulling="2026-01-23 19:04:04.126063939 +0000 UTC m=+3439.121888380" observedRunningTime="2026-01-23 19:04:04.743891272 +0000 UTC m=+3439.739715713" watchObservedRunningTime="2026-01-23 19:04:04.748871675 +0000 UTC m=+3439.744696126" Jan 23 19:04:07 crc kubenswrapper[4688]: I0123 19:04:07.353988 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:04:07 crc kubenswrapper[4688]: I0123 19:04:07.354042 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:04:08 crc kubenswrapper[4688]: I0123 19:04:08.437076 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-56vt6" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="registry-server" probeResult="failure" 
output=<
Jan 23 19:04:08 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s
Jan 23 19:04:08 crc kubenswrapper[4688]: >
Jan 23 19:04:14 crc kubenswrapper[4688]: I0123 19:04:14.413982 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v82zs" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" probeResult="failure" output=<
Jan 23 19:04:14 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s
Jan 23 19:04:14 crc kubenswrapper[4688]: >
Jan 23 19:04:17 crc kubenswrapper[4688]: I0123 19:04:17.404397 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-56vt6"
Jan 23 19:04:17 crc kubenswrapper[4688]: I0123 19:04:17.500405 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-56vt6"
Jan 23 19:04:17 crc kubenswrapper[4688]: I0123 19:04:17.700004 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56vt6"]
Jan 23 19:04:18 crc kubenswrapper[4688]: I0123 19:04:18.871847 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-56vt6" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="registry-server" containerID="cri-o://3e359d2360206cec2c30bddbc314bd75df8e774d1d253a09783e7962db9d75c1" gracePeriod=2
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.356847 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5"
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.883520 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959"}
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.886159 4688 generic.go:334] "Generic (PLEG): container finished" podID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerID="3e359d2360206cec2c30bddbc314bd75df8e774d1d253a09783e7962db9d75c1" exitCode=0
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.886217 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerDied","Data":"3e359d2360206cec2c30bddbc314bd75df8e774d1d253a09783e7962db9d75c1"}
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.886255 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56vt6" event={"ID":"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d","Type":"ContainerDied","Data":"2b2ed50471e350437e697513f6dc935ec1f3cf38379fc77d96c10492aaabec2a"}
Jan 23 19:04:19 crc kubenswrapper[4688]: I0123 19:04:19.886283 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2ed50471e350437e697513f6dc935ec1f3cf38379fc77d96c10492aaabec2a"
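Two threads resolve in this stretch. First, the marketplace registry pods' startup probes poll the catalog's gRPC endpoint on :50051 and keep failing ("timeout: failed to connect service \":50051\" within 1s", the signature of a grpc-health style check) until the extracted catalog starts serving. Second, the machine-config-daemon's 5m0s back-off finally expires and its container restarts (the ContainerStarted event at 19:04:19). The probe's check can be reproduced off-node; a minimal sketch, assuming grpcio and grpcio-health-checking are installed and using a hypothetical pod IP:

    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    # 10.217.0.99 is a placeholder; substitute the pod IP from `oc get pod -o wide`
    channel = grpc.insecure_channel("10.217.0.99:50051")
    stub = health_pb2_grpc.HealthStub(channel)
    resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1.0)
    print(resp.status)  # SERVING once the registry is ready; the RPC times out before that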
Need to start a new one" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.015077 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities\") pod \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.015256 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvd5j\" (UniqueName: \"kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j\") pod \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.015275 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content\") pod \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\" (UID: \"dbe2d679-c5ec-482f-b1b9-dfe49ee7245d\") " Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.016619 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities" (OuterVolumeSpecName: "utilities") pod "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" (UID: "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.025196 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j" (OuterVolumeSpecName: "kube-api-access-xvd5j") pod "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" (UID: "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d"). InnerVolumeSpecName "kube-api-access-xvd5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.067935 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" (UID: "dbe2d679-c5ec-482f-b1b9-dfe49ee7245d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.119307 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvd5j\" (UniqueName: \"kubernetes.io/projected/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-kube-api-access-xvd5j\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.119355 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.119368 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.894840 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-56vt6" Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.930714 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56vt6"] Jan 23 19:04:20 crc kubenswrapper[4688]: I0123 19:04:20.943556 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-56vt6"] Jan 23 19:04:21 crc kubenswrapper[4688]: I0123 19:04:21.368455 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" path="/var/lib/kubelet/pods/dbe2d679-c5ec-482f-b1b9-dfe49ee7245d/volumes" Jan 23 19:04:23 crc kubenswrapper[4688]: I0123 19:04:23.437601 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:23 crc kubenswrapper[4688]: I0123 19:04:23.498247 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:24 crc kubenswrapper[4688]: I0123 19:04:24.091573 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:04:24 crc kubenswrapper[4688]: I0123 19:04:24.932374 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v82zs" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" containerID="cri-o://20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529" gracePeriod=2 Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.520664 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.589668 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities\") pod \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.590056 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content\") pod \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.590102 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfvj7\" (UniqueName: \"kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7\") pod \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\" (UID: \"1329b800-1fbb-4633-b27d-a8ddd7f059c5\") " Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.591646 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities" (OuterVolumeSpecName: "utilities") pod "1329b800-1fbb-4633-b27d-a8ddd7f059c5" (UID: "1329b800-1fbb-4633-b27d-a8ddd7f059c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.601565 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7" (OuterVolumeSpecName: "kube-api-access-qfvj7") pod "1329b800-1fbb-4633-b27d-a8ddd7f059c5" (UID: "1329b800-1fbb-4633-b27d-a8ddd7f059c5"). InnerVolumeSpecName "kube-api-access-qfvj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.692693 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfvj7\" (UniqueName: \"kubernetes.io/projected/1329b800-1fbb-4633-b27d-a8ddd7f059c5-kube-api-access-qfvj7\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.692741 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.735250 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1329b800-1fbb-4633-b27d-a8ddd7f059c5" (UID: "1329b800-1fbb-4633-b27d-a8ddd7f059c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.794471 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1329b800-1fbb-4633-b27d-a8ddd7f059c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.944615 4688 generic.go:334] "Generic (PLEG): container finished" podID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerID="20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529" exitCode=0 Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.944712 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v82zs" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.944712 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerDied","Data":"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529"} Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.946178 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v82zs" event={"ID":"1329b800-1fbb-4633-b27d-a8ddd7f059c5","Type":"ContainerDied","Data":"32bd3f6a5630b52622a542aeabb1a5ba3b0ce755571f739579c29d9b2de40d07"} Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.946231 4688 scope.go:117] "RemoveContainer" containerID="20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.980406 4688 scope.go:117] "RemoveContainer" containerID="0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb" Jan 23 19:04:25 crc kubenswrapper[4688]: I0123 19:04:25.994237 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.003411 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v82zs"] Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.007515 4688 scope.go:117] "RemoveContainer" containerID="82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.058566 4688 scope.go:117] "RemoveContainer" containerID="20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.059014 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529\": container with ID starting with 20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529 not found: ID does not exist" containerID="20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.059066 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529"} err="failed to get container status \"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529\": rpc error: code = NotFound desc = could not find container \"20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529\": container with ID starting with 20e85fb7143052d5f682e02a43690bb4c515b2a7a8653352136f3f77585ee529 not found: ID does not exist" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.059095 4688 scope.go:117] "RemoveContainer" containerID="0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.059388 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb\": container with ID starting with 0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb not found: ID does not exist" containerID="0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.059414 4688 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb"} err="failed to get container status \"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb\": rpc error: code = NotFound desc = could not find container \"0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb\": container with ID starting with 0eca8bcee9e4686060e635235869058603d39fecba13820b2e6e9ad808f06dfb not found: ID does not exist" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.059432 4688 scope.go:117] "RemoveContainer" containerID="82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.059662 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7\": container with ID starting with 82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7 not found: ID does not exist" containerID="82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.059689 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7"} err="failed to get container status \"82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7\": rpc error: code = NotFound desc = could not find container \"82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7\": container with ID starting with 82e413994aad1e34e9dae4f7aee89fe8aad38ea9bc990b018bf5509da2c88bd7 not found: ID does not exist" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.699155 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"] Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.702341 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702374 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.702399 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702408 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.702424 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="extract-content" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702434 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="extract-content" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.702467 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="extract-content" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702477 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="extract-content" Jan 23 19:04:26 crc 
kubenswrapper[4688]: E0123 19:04:26.702491 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="extract-utilities" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702499 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="extract-utilities" Jan 23 19:04:26 crc kubenswrapper[4688]: E0123 19:04:26.702515 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="extract-utilities" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702522 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="extract-utilities" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702748 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.702771 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe2d679-c5ec-482f-b1b9-dfe49ee7245d" containerName="registry-server" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.704547 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.710870 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"] Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.713455 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4kqc\" (UniqueName: \"kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.713534 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.713729 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.814839 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4kqc\" (UniqueName: \"kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.814909 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " 
pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.814967 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.815433 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.815495 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:26 crc kubenswrapper[4688]: I0123 19:04:26.833793 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4kqc\" (UniqueName: \"kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc\") pod \"redhat-marketplace-g7hzb\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") " pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:27 crc kubenswrapper[4688]: I0123 19:04:27.023805 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:27 crc kubenswrapper[4688]: I0123 19:04:27.376772 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1329b800-1fbb-4633-b27d-a8ddd7f059c5" path="/var/lib/kubelet/pods/1329b800-1fbb-4633-b27d-a8ddd7f059c5/volumes" Jan 23 19:04:27 crc kubenswrapper[4688]: I0123 19:04:27.514292 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"] Jan 23 19:04:27 crc kubenswrapper[4688]: I0123 19:04:27.969850 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerStarted","Data":"a2155fa7bed13605199f4243a79045dfa891e81715faba27cfc4374e73132fae"} Jan 23 19:04:28 crc kubenswrapper[4688]: I0123 19:04:28.987250 4688 generic.go:334] "Generic (PLEG): container finished" podID="19157819-6ca8-48bf-8718-6c54705573e0" containerID="1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023" exitCode=0 Jan 23 19:04:28 crc kubenswrapper[4688]: I0123 19:04:28.987323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerDied","Data":"1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023"} Jan 23 19:04:32 crc kubenswrapper[4688]: I0123 19:04:32.020509 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerStarted","Data":"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"} Jan 23 19:04:33 crc kubenswrapper[4688]: I0123 19:04:33.030367 4688 generic.go:334] "Generic (PLEG): container finished" 
podID="19157819-6ca8-48bf-8718-6c54705573e0" containerID="1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42" exitCode=0 Jan 23 19:04:33 crc kubenswrapper[4688]: I0123 19:04:33.030416 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerDied","Data":"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"} Jan 23 19:04:35 crc kubenswrapper[4688]: I0123 19:04:35.050454 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerStarted","Data":"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"} Jan 23 19:04:35 crc kubenswrapper[4688]: I0123 19:04:35.082164 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g7hzb" podStartSLOduration=4.021080838 podStartE2EDuration="9.082139758s" podCreationTimestamp="2026-01-23 19:04:26 +0000 UTC" firstStartedPulling="2026-01-23 19:04:28.99263056 +0000 UTC m=+3463.988455001" lastFinishedPulling="2026-01-23 19:04:34.05368948 +0000 UTC m=+3469.049513921" observedRunningTime="2026-01-23 19:04:35.070776032 +0000 UTC m=+3470.066600483" watchObservedRunningTime="2026-01-23 19:04:35.082139758 +0000 UTC m=+3470.077964209" Jan 23 19:04:37 crc kubenswrapper[4688]: I0123 19:04:37.024620 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:37 crc kubenswrapper[4688]: I0123 19:04:37.024707 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:37 crc kubenswrapper[4688]: I0123 19:04:37.080456 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:47 crc kubenswrapper[4688]: I0123 19:04:47.077525 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g7hzb" Jan 23 19:04:47 crc kubenswrapper[4688]: I0123 19:04:47.186334 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"] Jan 23 19:04:47 crc kubenswrapper[4688]: I0123 19:04:47.186634 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g7hzb" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="registry-server" containerID="cri-o://6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f" gracePeriod=2 Jan 23 19:04:47 crc kubenswrapper[4688]: I0123 19:04:47.861099 4688 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.006551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content\") pod \"19157819-6ca8-48bf-8718-6c54705573e0\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") "
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.006622 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities\") pod \"19157819-6ca8-48bf-8718-6c54705573e0\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") "
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.006716 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4kqc\" (UniqueName: \"kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc\") pod \"19157819-6ca8-48bf-8718-6c54705573e0\" (UID: \"19157819-6ca8-48bf-8718-6c54705573e0\") "
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.007701 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities" (OuterVolumeSpecName: "utilities") pod "19157819-6ca8-48bf-8718-6c54705573e0" (UID: "19157819-6ca8-48bf-8718-6c54705573e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.012201 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc" (OuterVolumeSpecName: "kube-api-access-h4kqc") pod "19157819-6ca8-48bf-8718-6c54705573e0" (UID: "19157819-6ca8-48bf-8718-6c54705573e0"). InnerVolumeSpecName "kube-api-access-h4kqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.030422 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19157819-6ca8-48bf-8718-6c54705573e0" (UID: "19157819-6ca8-48bf-8718-6c54705573e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.109067 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4kqc\" (UniqueName: \"kubernetes.io/projected/19157819-6ca8-48bf-8718-6c54705573e0-kube-api-access-h4kqc\") on node \"crc\" DevicePath \"\""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.109115 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.109126 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19157819-6ca8-48bf-8718-6c54705573e0-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.177602 4688 generic.go:334] "Generic (PLEG): container finished" podID="19157819-6ca8-48bf-8718-6c54705573e0" containerID="6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f" exitCode=0
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.177657 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g7hzb"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.177654 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerDied","Data":"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"}
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.177774 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g7hzb" event={"ID":"19157819-6ca8-48bf-8718-6c54705573e0","Type":"ContainerDied","Data":"a2155fa7bed13605199f4243a79045dfa891e81715faba27cfc4374e73132fae"}
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.177799 4688 scope.go:117] "RemoveContainer" containerID="6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.208657 4688 scope.go:117] "RemoveContainer" containerID="1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.213554 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"]
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.225005 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g7hzb"]
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.233815 4688 scope.go:117] "RemoveContainer" containerID="1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.281211 4688 scope.go:117] "RemoveContainer" containerID="6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"
Jan 23 19:04:48 crc kubenswrapper[4688]: E0123 19:04:48.281659 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f\": container with ID starting with 6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f not found: ID does not exist" containerID="6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.281695 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f"} err="failed to get container status \"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f\": rpc error: code = NotFound desc = could not find container \"6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f\": container with ID starting with 6197f69ec52d0dba63d24372b432fa4e17f246e9f7708325400fb2d7deaef74f not found: ID does not exist"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.281721 4688 scope.go:117] "RemoveContainer" containerID="1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"
Jan 23 19:04:48 crc kubenswrapper[4688]: E0123 19:04:48.282169 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42\": container with ID starting with 1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42 not found: ID does not exist" containerID="1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.282299 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42"} err="failed to get container status \"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42\": rpc error: code = NotFound desc = could not find container \"1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42\": container with ID starting with 1dd3355008eac5cd55770760d3150b5355b48d504ed79c63259537be1071ac42 not found: ID does not exist"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.282393 4688 scope.go:117] "RemoveContainer" containerID="1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023"
Jan 23 19:04:48 crc kubenswrapper[4688]: E0123 19:04:48.282752 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023\": container with ID starting with 1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023 not found: ID does not exist" containerID="1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023"
Jan 23 19:04:48 crc kubenswrapper[4688]: I0123 19:04:48.282781 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023"} err="failed to get container status \"1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023\": rpc error: code = NotFound desc = could not find container \"1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023\": container with ID starting with 1c595490a41883bf8bdc0d3e48935bd786a5d0ad57a2fa6ea2f72c622b800023 not found: ID does not exist"
Jan 23 19:04:49 crc kubenswrapper[4688]: I0123 19:04:49.368557 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19157819-6ca8-48bf-8718-6c54705573e0" path="/var/lib/kubelet/pods/19157819-6ca8-48bf-8718-6c54705573e0/volumes"
Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.766596 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xcmsd"]
Jan 23 19:04:50 crc kubenswrapper[4688]: E0123 19:04:50.768119 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="registry-server"
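The RemoveContainer / NotFound pairs above are a benign race: the kubelet re-queries CRI-O for the status of containers it has just deleted, the runtime answers NotFound because they are already gone, and pod_container_deletor logs the error even though cleanup succeeded. A minimal sketch that pairs the two (assuming this journal has been saved to a file, here the hypothetical kubelet.log):

```python
import re

removed, not_found = set(), set()
with open("kubelet.log") as f:                       # hypothetical path
    for line in f:
        # scope.go "RemoveContainer" entries name the ID being deleted
        m = re.search(r'"RemoveContainer" containerID="([0-9a-f]{64})"', line)
        if m:
            removed.add(m.group(1))
        # runtime NotFound errors name the same ID in their trailing attribute
        m = re.search(r'code = NotFound .*containerID="([0-9a-f]{64})"', line)
        if m:
            not_found.add(m.group(1))

# IDs in both sets hit the already-deleted race; the rest were removed cleanly.
print(sorted(removed & not_found))
```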
podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="registry-server" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.768245 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="registry-server" Jan 23 19:04:50 crc kubenswrapper[4688]: E0123 19:04:50.768330 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="extract-content" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.768422 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="extract-content" Jan 23 19:04:50 crc kubenswrapper[4688]: E0123 19:04:50.768506 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="extract-utilities" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.768560 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="extract-utilities" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.768820 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="19157819-6ca8-48bf-8718-6c54705573e0" containerName="registry-server" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.770564 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.793054 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcmsd"] Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.866740 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28z7b\" (UniqueName: \"kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.867139 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.867255 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.969040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.969114 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities\") pod 
\"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.969256 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z7b\" (UniqueName: \"kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.969550 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:50 crc kubenswrapper[4688]: I0123 19:04:50.969873 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:51 crc kubenswrapper[4688]: I0123 19:04:51.001504 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z7b\" (UniqueName: \"kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b\") pod \"community-operators-xcmsd\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:51 crc kubenswrapper[4688]: I0123 19:04:51.101352 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:04:51 crc kubenswrapper[4688]: I0123 19:04:51.639359 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcmsd"] Jan 23 19:04:52 crc kubenswrapper[4688]: I0123 19:04:52.244911 4688 generic.go:334] "Generic (PLEG): container finished" podID="dcacd651-9043-4f82-9813-ad259868bd67" containerID="1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba" exitCode=0 Jan 23 19:04:52 crc kubenswrapper[4688]: I0123 19:04:52.245044 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerDied","Data":"1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba"} Jan 23 19:04:52 crc kubenswrapper[4688]: I0123 19:04:52.245284 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerStarted","Data":"b2176df8207d4a55528fbdf270bbbe5adf917bc89a6def953028d4adc7e29645"} Jan 23 19:04:57 crc kubenswrapper[4688]: I0123 19:04:57.317633 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerStarted","Data":"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e"} Jan 23 19:04:58 crc kubenswrapper[4688]: I0123 19:04:58.329353 4688 generic.go:334] "Generic (PLEG): container finished" podID="dcacd651-9043-4f82-9813-ad259868bd67" containerID="4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e" exitCode=0 Jan 23 19:04:58 crc kubenswrapper[4688]: I0123 19:04:58.329457 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerDied","Data":"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e"} Jan 23 19:04:59 crc kubenswrapper[4688]: I0123 19:04:59.343931 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerStarted","Data":"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5"} Jan 23 19:04:59 crc kubenswrapper[4688]: I0123 19:04:59.376723 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xcmsd" podStartSLOduration=2.752133225 podStartE2EDuration="9.376699941s" podCreationTimestamp="2026-01-23 19:04:50 +0000 UTC" firstStartedPulling="2026-01-23 19:04:52.250060655 +0000 UTC m=+3487.245885096" lastFinishedPulling="2026-01-23 19:04:58.874627371 +0000 UTC m=+3493.870451812" observedRunningTime="2026-01-23 19:04:59.359854738 +0000 UTC m=+3494.355679209" watchObservedRunningTime="2026-01-23 19:04:59.376699941 +0000 UTC m=+3494.372524382" Jan 23 19:05:01 crc kubenswrapper[4688]: I0123 19:05:01.102525 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:01 crc kubenswrapper[4688]: I0123 19:05:01.103069 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:01 crc kubenswrapper[4688]: I0123 19:05:01.159550 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
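The catalog pods in this log follow the same probe sequence: an empty status="" (apparently a readiness result not yet determined; the pod spec is not in the log, so this reading is hedged), one unhealthy startup probe, then started, with readiness flipping to ready about ten seconds later. A small sketch that reduces the "SyncLoop (probe)" entries to a per-pod transition history:

```python
import re
from collections import defaultdict

# Matches the journal timestamp plus the probe/status/pod attributes above.
probe_re = re.compile(
    r'(\w{3} \d+ \d+:\d+:\d+).*?probe="(\w+)" status="(\w*)" pod="([^"]+)"')

history = defaultdict(list)
with open("kubelet.log") as f:                       # hypothetical path
    for line in f:
        m = probe_re.search(line)
        if m:
            ts, probe, status, pod = m.groups()
            history[pod].append((ts, probe, status or "<unknown>"))

for pod, events in history.items():
    print(pod)
    for ts, probe, status in events:
        print(f"  {ts} {probe:>9} -> {status}")
```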
pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:11 crc kubenswrapper[4688]: I0123 19:05:11.154216 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:11 crc kubenswrapper[4688]: I0123 19:05:11.216533 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcmsd"] Jan 23 19:05:11 crc kubenswrapper[4688]: I0123 19:05:11.467990 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xcmsd" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="registry-server" containerID="cri-o://bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5" gracePeriod=2 Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.084649 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.195948 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities\") pod \"dcacd651-9043-4f82-9813-ad259868bd67\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.196102 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28z7b\" (UniqueName: \"kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b\") pod \"dcacd651-9043-4f82-9813-ad259868bd67\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.196286 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content\") pod \"dcacd651-9043-4f82-9813-ad259868bd67\" (UID: \"dcacd651-9043-4f82-9813-ad259868bd67\") " Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.196827 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities" (OuterVolumeSpecName: "utilities") pod "dcacd651-9043-4f82-9813-ad259868bd67" (UID: "dcacd651-9043-4f82-9813-ad259868bd67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.197755 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.202411 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b" (OuterVolumeSpecName: "kube-api-access-28z7b") pod "dcacd651-9043-4f82-9813-ad259868bd67" (UID: "dcacd651-9043-4f82-9813-ad259868bd67"). InnerVolumeSpecName "kube-api-access-28z7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.265009 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcacd651-9043-4f82-9813-ad259868bd67" (UID: "dcacd651-9043-4f82-9813-ad259868bd67"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.299935 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28z7b\" (UniqueName: \"kubernetes.io/projected/dcacd651-9043-4f82-9813-ad259868bd67-kube-api-access-28z7b\") on node \"crc\" DevicePath \"\"" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.299973 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcacd651-9043-4f82-9813-ad259868bd67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.479143 4688 generic.go:334] "Generic (PLEG): container finished" podID="dcacd651-9043-4f82-9813-ad259868bd67" containerID="bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5" exitCode=0 Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.479206 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerDied","Data":"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5"} Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.479230 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcmsd" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.479249 4688 scope.go:117] "RemoveContainer" containerID="bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.479239 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcmsd" event={"ID":"dcacd651-9043-4f82-9813-ad259868bd67","Type":"ContainerDied","Data":"b2176df8207d4a55528fbdf270bbbe5adf917bc89a6def953028d4adc7e29645"} Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.503854 4688 scope.go:117] "RemoveContainer" containerID="4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.517783 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcmsd"] Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.529542 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xcmsd"] Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.543978 4688 scope.go:117] "RemoveContainer" containerID="1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.584848 4688 scope.go:117] "RemoveContainer" containerID="bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5" Jan 23 19:05:12 crc kubenswrapper[4688]: E0123 19:05:12.586291 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5\": container with ID starting with bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5 not found: ID does not exist" containerID="bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.586348 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5"} err="failed to get container status 
\"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5\": rpc error: code = NotFound desc = could not find container \"bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5\": container with ID starting with bb4f653e092d544c6cfd82dcec22bb54c04b643456d91a89f34af04662d912d5 not found: ID does not exist" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.586383 4688 scope.go:117] "RemoveContainer" containerID="4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e" Jan 23 19:05:12 crc kubenswrapper[4688]: E0123 19:05:12.586848 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e\": container with ID starting with 4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e not found: ID does not exist" containerID="4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.586888 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e"} err="failed to get container status \"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e\": rpc error: code = NotFound desc = could not find container \"4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e\": container with ID starting with 4a51e42c2a97d41c586fce209201127cfd6ca1198ddf42976f988c5b69ee4a2e not found: ID does not exist" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.586911 4688 scope.go:117] "RemoveContainer" containerID="1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba" Jan 23 19:05:12 crc kubenswrapper[4688]: E0123 19:05:12.587213 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba\": container with ID starting with 1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba not found: ID does not exist" containerID="1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba" Jan 23 19:05:12 crc kubenswrapper[4688]: I0123 19:05:12.587255 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba"} err="failed to get container status \"1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba\": rpc error: code = NotFound desc = could not find container \"1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba\": container with ID starting with 1c909e9950e7b08a1544926aeda9b85535f4b21dd941bed93dc0c8984d8499ba not found: ID does not exist" Jan 23 19:05:13 crc kubenswrapper[4688]: I0123 19:05:13.381356 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcacd651-9043-4f82-9813-ad259868bd67" path="/var/lib/kubelet/pods/dcacd651-9043-4f82-9813-ad259868bd67/volumes" Jan 23 19:06:04 crc kubenswrapper[4688]: E0123 19:06:04.276729 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.213:36916->38.129.56.213:41963: write tcp 38.129.56.213:36916->38.129.56.213:41963: write: broken pipe Jan 23 19:06:36 crc kubenswrapper[4688]: I0123 19:06:36.965674 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:06:36 crc kubenswrapper[4688]: I0123 19:06:36.967146 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:07:06 crc kubenswrapper[4688]: I0123 19:07:06.965516 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:07:06 crc kubenswrapper[4688]: I0123 19:07:06.966073 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:07:36 crc kubenswrapper[4688]: I0123 19:07:36.965402 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:07:36 crc kubenswrapper[4688]: I0123 19:07:36.965991 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:07:36 crc kubenswrapper[4688]: I0123 19:07:36.966052 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:07:36 crc kubenswrapper[4688]: I0123 19:07:36.967001 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:07:36 crc kubenswrapper[4688]: I0123 19:07:36.967071 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959" gracePeriod=600 Jan 23 19:07:38 crc kubenswrapper[4688]: I0123 19:07:38.024686 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959" exitCode=0 Jan 23 19:07:38 crc kubenswrapper[4688]: I0123 19:07:38.024769 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959"} Jan 23 19:07:38 crc kubenswrapper[4688]: I0123 19:07:38.025297 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8"} Jan 23 19:07:38 crc kubenswrapper[4688]: I0123 19:07:38.025325 4688 scope.go:117] "RemoveContainer" containerID="31b25aa7053481a93395681cbf3c1a0db722b1deb667f824eb617d45817b18a5" Jan 23 19:09:58 crc kubenswrapper[4688]: I0123 19:09:58.366979 4688 scope.go:117] "RemoveContainer" containerID="41d6faa51046b480af41a00b47453dc4746a38e0f937c40bbbd6eb4c8376b73b" Jan 23 19:10:06 crc kubenswrapper[4688]: I0123 19:10:06.965871 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:10:06 crc kubenswrapper[4688]: I0123 19:10:06.966532 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:10:36 crc kubenswrapper[4688]: I0123 19:10:36.965505 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:10:36 crc kubenswrapper[4688]: I0123 19:10:36.966104 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:10:58 crc kubenswrapper[4688]: I0123 19:10:58.411404 4688 scope.go:117] "RemoveContainer" containerID="bbb5d21f8c0e51a94d527c63fd2158693888787320a37bdd061e164d25229543" Jan 23 19:10:58 crc kubenswrapper[4688]: I0123 19:10:58.443926 4688 scope.go:117] "RemoveContainer" containerID="3e359d2360206cec2c30bddbc314bd75df8e774d1d253a09783e7962db9d75c1" Jan 23 19:11:06 crc kubenswrapper[4688]: I0123 19:11:06.965219 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:11:06 crc kubenswrapper[4688]: I0123 19:11:06.965796 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:11:06 crc kubenswrapper[4688]: I0123 19:11:06.965890 4688 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:11:06 crc kubenswrapper[4688]: I0123 19:11:06.966795 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:11:06 crc kubenswrapper[4688]: I0123 19:11:06.966862 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" gracePeriod=600 Jan 23 19:11:07 crc kubenswrapper[4688]: E0123 19:11:07.097950 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:11:07 crc kubenswrapper[4688]: I0123 19:11:07.221983 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" exitCode=0 Jan 23 19:11:07 crc kubenswrapper[4688]: I0123 19:11:07.222132 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8"} Jan 23 19:11:07 crc kubenswrapper[4688]: I0123 19:11:07.222213 4688 scope.go:117] "RemoveContainer" containerID="f3449262ef42f408f3b4cbe6d08bf2cd71b2b3fa03ba60177eb8cdf5394b8959" Jan 23 19:11:07 crc kubenswrapper[4688]: I0123 19:11:07.223410 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:11:07 crc kubenswrapper[4688]: E0123 19:11:07.223780 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:11:22 crc kubenswrapper[4688]: I0123 19:11:22.356723 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:11:22 crc kubenswrapper[4688]: E0123 19:11:22.357605 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:11:23 
crc kubenswrapper[4688]: I0123 19:11:23.159823 4688 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4m5tx container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 19:11:23 crc kubenswrapper[4688]: I0123 19:11:23.159956 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" podUID="4bc9750e-684a-4163-85c7-328d7a64ac9b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 19:11:23 crc kubenswrapper[4688]: I0123 19:11:23.160625 4688 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-4m5tx container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 19:11:23 crc kubenswrapper[4688]: I0123 19:11:23.160677 4688 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4m5tx" podUID="4bc9750e-684a-4163-85c7-328d7a64ac9b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 19:11:34 crc kubenswrapper[4688]: I0123 19:11:34.356858 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:11:34 crc kubenswrapper[4688]: E0123 19:11:34.357699 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:11:48 crc kubenswrapper[4688]: I0123 19:11:48.356370 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:11:48 crc kubenswrapper[4688]: E0123 19:11:48.357134 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:12:00 crc kubenswrapper[4688]: I0123 19:12:00.357903 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:12:00 crc kubenswrapper[4688]: E0123 19:12:00.358761 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:12:12 crc kubenswrapper[4688]: I0123 19:12:12.356397 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:12:12 crc kubenswrapper[4688]: E0123 19:12:12.357365 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:12:25 crc kubenswrapper[4688]: I0123 19:12:25.365975 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:12:25 crc kubenswrapper[4688]: E0123 19:12:25.366940 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:12:36 crc kubenswrapper[4688]: I0123 19:12:36.356765 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:12:36 crc kubenswrapper[4688]: E0123 19:12:36.357561 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:12:49 crc kubenswrapper[4688]: I0123 19:12:49.356330 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:12:49 crc kubenswrapper[4688]: E0123 19:12:49.357102 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:13:02 crc kubenswrapper[4688]: I0123 19:13:02.356889 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:13:02 crc kubenswrapper[4688]: E0123 19:13:02.357774 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:13:13 crc kubenswrapper[4688]: I0123 19:13:13.357198 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:13:13 crc kubenswrapper[4688]: E0123 19:13:13.359091 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:13:28 crc kubenswrapper[4688]: I0123 19:13:28.356013 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:13:28 crc kubenswrapper[4688]: E0123 19:13:28.356954 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:13:40 crc kubenswrapper[4688]: I0123 19:13:40.356764 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:13:40 crc kubenswrapper[4688]: E0123 19:13:40.357550 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:13:52 crc kubenswrapper[4688]: I0123 19:13:52.356213 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:13:52 crc kubenswrapper[4688]: E0123 19:13:52.358176 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:05 crc kubenswrapper[4688]: I0123 19:14:05.365012 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:14:05 crc kubenswrapper[4688]: E0123 19:14:05.365858 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:17 crc kubenswrapper[4688]: I0123 19:14:17.356820 4688 scope.go:117] "RemoveContainer" 
containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:14:17 crc kubenswrapper[4688]: E0123 19:14:17.358260 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:28 crc kubenswrapper[4688]: I0123 19:14:28.356776 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:14:28 crc kubenswrapper[4688]: E0123 19:14:28.358237 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.005872 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:29 crc kubenswrapper[4688]: E0123 19:14:29.006391 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="extract-content" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.006407 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="extract-content" Jan 23 19:14:29 crc kubenswrapper[4688]: E0123 19:14:29.006430 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="registry-server" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.006437 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="registry-server" Jan 23 19:14:29 crc kubenswrapper[4688]: E0123 19:14:29.006470 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="extract-utilities" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.006477 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="extract-utilities" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.006663 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcacd651-9043-4f82-9813-ad259868bd67" containerName="registry-server" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.008125 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.029599 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.050298 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.050493 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.050770 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rx6k\" (UniqueName: \"kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.152812 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rx6k\" (UniqueName: \"kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.153075 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.153251 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.153788 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.153869 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.175414 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4rx6k\" (UniqueName: \"kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k\") pod \"certified-operators-6mhqh\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.331612 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:29 crc kubenswrapper[4688]: I0123 19:14:29.985827 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:30 crc kubenswrapper[4688]: I0123 19:14:30.410981 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerStarted","Data":"1de7974325e189b84472d0860df5310b7d0a649542c9e20616e37296f40808ca"} Jan 23 19:14:31 crc kubenswrapper[4688]: I0123 19:14:31.421079 4688 generic.go:334] "Generic (PLEG): container finished" podID="da305480-956a-4114-aff9-467c8044ac3b" containerID="33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71" exitCode=0 Jan 23 19:14:31 crc kubenswrapper[4688]: I0123 19:14:31.421166 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerDied","Data":"33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71"} Jan 23 19:14:31 crc kubenswrapper[4688]: I0123 19:14:31.423315 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:14:33 crc kubenswrapper[4688]: I0123 19:14:33.502928 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerStarted","Data":"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4"} Jan 23 19:14:34 crc kubenswrapper[4688]: I0123 19:14:34.513240 4688 generic.go:334] "Generic (PLEG): container finished" podID="da305480-956a-4114-aff9-467c8044ac3b" containerID="568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4" exitCode=0 Jan 23 19:14:34 crc kubenswrapper[4688]: I0123 19:14:34.513323 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerDied","Data":"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4"} Jan 23 19:14:35 crc kubenswrapper[4688]: I0123 19:14:35.524198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerStarted","Data":"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd"} Jan 23 19:14:35 crc kubenswrapper[4688]: I0123 19:14:35.549159 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6mhqh" podStartSLOduration=3.849473184 podStartE2EDuration="7.549134634s" podCreationTimestamp="2026-01-23 19:14:28 +0000 UTC" firstStartedPulling="2026-01-23 19:14:31.423003754 +0000 UTC m=+4066.418828195" lastFinishedPulling="2026-01-23 19:14:35.122665204 +0000 UTC m=+4070.118489645" observedRunningTime="2026-01-23 19:14:35.543207474 +0000 UTC m=+4070.539031935" watchObservedRunningTime="2026-01-23 
19:14:35.549134634 +0000 UTC m=+4070.544959075" Jan 23 19:14:39 crc kubenswrapper[4688]: I0123 19:14:39.333158 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:39 crc kubenswrapper[4688]: I0123 19:14:39.333753 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:39 crc kubenswrapper[4688]: I0123 19:14:39.357565 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:14:39 crc kubenswrapper[4688]: E0123 19:14:39.358031 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:39 crc kubenswrapper[4688]: I0123 19:14:39.387101 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:49 crc kubenswrapper[4688]: I0123 19:14:49.386969 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:49 crc kubenswrapper[4688]: I0123 19:14:49.477068 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:49 crc kubenswrapper[4688]: I0123 19:14:49.765205 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6mhqh" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="registry-server" containerID="cri-o://8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd" gracePeriod=2 Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.315217 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.481057 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content\") pod \"da305480-956a-4114-aff9-467c8044ac3b\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.481229 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rx6k\" (UniqueName: \"kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k\") pod \"da305480-956a-4114-aff9-467c8044ac3b\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.481314 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities\") pod \"da305480-956a-4114-aff9-467c8044ac3b\" (UID: \"da305480-956a-4114-aff9-467c8044ac3b\") " Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.482552 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities" (OuterVolumeSpecName: "utilities") pod "da305480-956a-4114-aff9-467c8044ac3b" (UID: "da305480-956a-4114-aff9-467c8044ac3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.491561 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k" (OuterVolumeSpecName: "kube-api-access-4rx6k") pod "da305480-956a-4114-aff9-467c8044ac3b" (UID: "da305480-956a-4114-aff9-467c8044ac3b"). InnerVolumeSpecName "kube-api-access-4rx6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.534768 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da305480-956a-4114-aff9-467c8044ac3b" (UID: "da305480-956a-4114-aff9-467c8044ac3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.585330 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.585767 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da305480-956a-4114-aff9-467c8044ac3b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.585795 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rx6k\" (UniqueName: \"kubernetes.io/projected/da305480-956a-4114-aff9-467c8044ac3b-kube-api-access-4rx6k\") on node \"crc\" DevicePath \"\"" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.776975 4688 generic.go:334] "Generic (PLEG): container finished" podID="da305480-956a-4114-aff9-467c8044ac3b" containerID="8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd" exitCode=0 Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.777035 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerDied","Data":"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd"} Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.777069 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6mhqh" event={"ID":"da305480-956a-4114-aff9-467c8044ac3b","Type":"ContainerDied","Data":"1de7974325e189b84472d0860df5310b7d0a649542c9e20616e37296f40808ca"} Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.777073 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6mhqh" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.777088 4688 scope.go:117] "RemoveContainer" containerID="8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.799802 4688 scope.go:117] "RemoveContainer" containerID="568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.817482 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.824456 4688 scope.go:117] "RemoveContainer" containerID="33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.827094 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6mhqh"] Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.879211 4688 scope.go:117] "RemoveContainer" containerID="8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd" Jan 23 19:14:50 crc kubenswrapper[4688]: E0123 19:14:50.879659 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd\": container with ID starting with 8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd not found: ID does not exist" containerID="8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.879706 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd"} err="failed to get container status \"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd\": rpc error: code = NotFound desc = could not find container \"8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd\": container with ID starting with 8c03dd60b5856eb11b6de9b5efb73d8fb43c197cb55f5ce8b519be4ece72a0bd not found: ID does not exist" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.879742 4688 scope.go:117] "RemoveContainer" containerID="568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4" Jan 23 19:14:50 crc kubenswrapper[4688]: E0123 19:14:50.880065 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4\": container with ID starting with 568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4 not found: ID does not exist" containerID="568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.880106 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4"} err="failed to get container status \"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4\": rpc error: code = NotFound desc = could not find container \"568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4\": container with ID starting with 568ccf844b880826d78bab252901aef281ce976474986004ec4ca2a408cde1a4 not found: ID does not exist" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.880134 4688 scope.go:117] "RemoveContainer" 
containerID="33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71" Jan 23 19:14:50 crc kubenswrapper[4688]: E0123 19:14:50.880642 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71\": container with ID starting with 33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71 not found: ID does not exist" containerID="33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71" Jan 23 19:14:50 crc kubenswrapper[4688]: I0123 19:14:50.880675 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71"} err="failed to get container status \"33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71\": rpc error: code = NotFound desc = could not find container \"33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71\": container with ID starting with 33c27896b4413456be08f1732dcd70d81c1e89bb37ceaea9f9c326285caf3d71 not found: ID does not exist" Jan 23 19:14:51 crc kubenswrapper[4688]: I0123 19:14:51.356941 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:14:51 crc kubenswrapper[4688]: E0123 19:14:51.357278 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:14:51 crc kubenswrapper[4688]: I0123 19:14:51.368336 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da305480-956a-4114-aff9-467c8044ac3b" path="/var/lib/kubelet/pods/da305480-956a-4114-aff9-467c8044ac3b/volumes" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.185932 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b"] Jan 23 19:15:00 crc kubenswrapper[4688]: E0123 19:15:00.187732 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.187830 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4688]: E0123 19:15:00.187911 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="extract-content" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.187972 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="extract-content" Jan 23 19:15:00 crc kubenswrapper[4688]: E0123 19:15:00.188054 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="extract-utilities" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.188120 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="extract-utilities" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.188386 4688 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="da305480-956a-4114-aff9-467c8044ac3b" containerName="registry-server" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.189250 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.192122 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.192483 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.197508 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b"] Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.286874 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.286936 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxbx\" (UniqueName: \"kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.287126 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.388872 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.389015 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.389058 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxbx\" (UniqueName: \"kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 
19:15:00.390289 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.405271 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.410405 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxbx\" (UniqueName: \"kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx\") pod \"collect-profiles-29486595-v7b4b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.517621 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:00 crc kubenswrapper[4688]: I0123 19:15:00.999133 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b"] Jan 23 19:15:01 crc kubenswrapper[4688]: I0123 19:15:01.884429 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" event={"ID":"4e03f7ce-cace-498e-ad82-1a62f9fcc01b","Type":"ContainerStarted","Data":"1a75fe91321e35fabb05ec4cfc67826c238428c228badf148cd477bb9dfdc9bd"} Jan 23 19:15:01 crc kubenswrapper[4688]: I0123 19:15:01.884718 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" event={"ID":"4e03f7ce-cace-498e-ad82-1a62f9fcc01b","Type":"ContainerStarted","Data":"55bfc37e10e1d4d372797dcb095c6726b0f5b87fbf2a417820aba6ff0e71349b"} Jan 23 19:15:01 crc kubenswrapper[4688]: I0123 19:15:01.906535 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" podStartSLOduration=1.90651436 podStartE2EDuration="1.90651436s" podCreationTimestamp="2026-01-23 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:15:01.902499564 +0000 UTC m=+4096.898324005" watchObservedRunningTime="2026-01-23 19:15:01.90651436 +0000 UTC m=+4096.902338791" Jan 23 19:15:02 crc kubenswrapper[4688]: I0123 19:15:02.897781 4688 generic.go:334] "Generic (PLEG): container finished" podID="4e03f7ce-cace-498e-ad82-1a62f9fcc01b" containerID="1a75fe91321e35fabb05ec4cfc67826c238428c228badf148cd477bb9dfdc9bd" exitCode=0 Jan 23 19:15:02 crc kubenswrapper[4688]: I0123 19:15:02.897875 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" event={"ID":"4e03f7ce-cace-498e-ad82-1a62f9fcc01b","Type":"ContainerDied","Data":"1a75fe91321e35fabb05ec4cfc67826c238428c228badf148cd477bb9dfdc9bd"} Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 
19:15:04.334156 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.376148 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume\") pod \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.376286 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume\") pod \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.376504 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqxbx\" (UniqueName: \"kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx\") pod \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\" (UID: \"4e03f7ce-cace-498e-ad82-1a62f9fcc01b\") " Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.377566 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e03f7ce-cace-498e-ad82-1a62f9fcc01b" (UID: "4e03f7ce-cace-498e-ad82-1a62f9fcc01b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.384794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx" (OuterVolumeSpecName: "kube-api-access-fqxbx") pod "4e03f7ce-cace-498e-ad82-1a62f9fcc01b" (UID: "4e03f7ce-cace-498e-ad82-1a62f9fcc01b"). InnerVolumeSpecName "kube-api-access-fqxbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.388333 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e03f7ce-cace-498e-ad82-1a62f9fcc01b" (UID: "4e03f7ce-cace-498e-ad82-1a62f9fcc01b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.503884 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqxbx\" (UniqueName: \"kubernetes.io/projected/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-kube-api-access-fqxbx\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.503969 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.503980 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e03f7ce-cace-498e-ad82-1a62f9fcc01b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.914609 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" event={"ID":"4e03f7ce-cace-498e-ad82-1a62f9fcc01b","Type":"ContainerDied","Data":"55bfc37e10e1d4d372797dcb095c6726b0f5b87fbf2a417820aba6ff0e71349b"} Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.914873 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55bfc37e10e1d4d372797dcb095c6726b0f5b87fbf2a417820aba6ff0e71349b" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.914697 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486595-v7b4b" Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.984749 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"] Jan 23 19:15:04 crc kubenswrapper[4688]: I0123 19:15:04.994038 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486550-crlt9"] Jan 23 19:15:05 crc kubenswrapper[4688]: I0123 19:15:05.380705 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0" path="/var/lib/kubelet/pods/5aa3c3ea-51fe-4e67-b4f4-933b5e5f35b0/volumes" Jan 23 19:15:06 crc kubenswrapper[4688]: I0123 19:15:06.356864 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:15:06 crc kubenswrapper[4688]: E0123 19:15:06.357232 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:15:20 crc kubenswrapper[4688]: I0123 19:15:20.358051 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:15:20 crc kubenswrapper[4688]: E0123 19:15:20.359554 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:15:34 crc kubenswrapper[4688]: I0123 19:15:34.356611 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:15:34 crc kubenswrapper[4688]: E0123 19:15:34.357498 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.098692 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:44 crc kubenswrapper[4688]: E0123 19:15:44.099618 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e03f7ce-cace-498e-ad82-1a62f9fcc01b" containerName="collect-profiles" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.099630 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e03f7ce-cace-498e-ad82-1a62f9fcc01b" containerName="collect-profiles" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.099814 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e03f7ce-cace-498e-ad82-1a62f9fcc01b" containerName="collect-profiles" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.101222 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.113785 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.205229 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.205617 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.205904 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbp7\" (UniqueName: \"kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.307481 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " 
pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.307574 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.307650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnbp7\" (UniqueName: \"kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.308110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.308174 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.340288 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnbp7\" (UniqueName: \"kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7\") pod \"redhat-marketplace-5d7pf\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:44 crc kubenswrapper[4688]: I0123 19:15:44.429074 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:45 crc kubenswrapper[4688]: I0123 19:15:45.597315 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:46 crc kubenswrapper[4688]: I0123 19:15:46.299275 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerID="9011924ad0f80485a8c6d153772866bfbea386571d7834ce73b81e24461c2d94" exitCode=0 Jan 23 19:15:46 crc kubenswrapper[4688]: I0123 19:15:46.299337 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerDied","Data":"9011924ad0f80485a8c6d153772866bfbea386571d7834ce73b81e24461c2d94"} Jan 23 19:15:46 crc kubenswrapper[4688]: I0123 19:15:46.299570 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerStarted","Data":"b88b71480f6da24371858023fb03f93b9ddfbd0c789438ad57918b32c067da8c"} Jan 23 19:15:47 crc kubenswrapper[4688]: I0123 19:15:47.357635 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:15:47 crc kubenswrapper[4688]: E0123 19:15:47.358546 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:15:48 crc kubenswrapper[4688]: I0123 19:15:48.318565 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerID="a2d9903c0a9a7e591912a3c3fe7569ce51130d59922d3dedde87d9c5aad38b07" exitCode=0 Jan 23 19:15:48 crc kubenswrapper[4688]: I0123 19:15:48.318608 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerDied","Data":"a2d9903c0a9a7e591912a3c3fe7569ce51130d59922d3dedde87d9c5aad38b07"} Jan 23 19:15:49 crc kubenswrapper[4688]: I0123 19:15:49.329823 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerStarted","Data":"c7eed17f8011ffab6eb2dde5e51be682319a657826814b48e7281fbb290b14e4"} Jan 23 19:15:49 crc kubenswrapper[4688]: I0123 19:15:49.356878 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5d7pf" podStartSLOduration=2.865874678 podStartE2EDuration="5.356855415s" podCreationTimestamp="2026-01-23 19:15:44 +0000 UTC" firstStartedPulling="2026-01-23 19:15:46.301198562 +0000 UTC m=+4141.297023003" lastFinishedPulling="2026-01-23 19:15:48.792179299 +0000 UTC m=+4143.788003740" observedRunningTime="2026-01-23 19:15:49.348593047 +0000 UTC m=+4144.344417508" watchObservedRunningTime="2026-01-23 19:15:49.356855415 +0000 UTC m=+4144.352679856" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.599463 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:15:52 crc 
kubenswrapper[4688]: I0123 19:15:52.602826 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.608556 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.793852 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92mp\" (UniqueName: \"kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.793903 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.794163 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.895948 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.896040 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92mp\" (UniqueName: \"kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.896062 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.896623 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:52 crc kubenswrapper[4688]: I0123 19:15:52.896740 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:53 crc kubenswrapper[4688]: I0123 
19:15:53.169891 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92mp\" (UniqueName: \"kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp\") pod \"redhat-operators-rvc2h\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:53 crc kubenswrapper[4688]: I0123 19:15:53.235508 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:15:53 crc kubenswrapper[4688]: I0123 19:15:53.734334 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.397380 4688 generic.go:334] "Generic (PLEG): container finished" podID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerID="b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705" exitCode=0 Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.397474 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerDied","Data":"b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705"} Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.397737 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerStarted","Data":"947d057375baa09349b20b71aebd98fcf75706265e3e959b36321fa2f7e43d5f"} Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.430870 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.430948 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:54 crc kubenswrapper[4688]: I0123 19:15:54.493101 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:55 crc kubenswrapper[4688]: I0123 19:15:55.407911 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerStarted","Data":"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7"} Jan 23 19:15:55 crc kubenswrapper[4688]: I0123 19:15:55.458573 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:56 crc kubenswrapper[4688]: I0123 19:15:56.768845 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:57 crc kubenswrapper[4688]: I0123 19:15:57.431502 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5d7pf" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="registry-server" containerID="cri-o://c7eed17f8011ffab6eb2dde5e51be682319a657826814b48e7281fbb290b14e4" gracePeriod=2 Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.460199 4688 generic.go:334] "Generic (PLEG): container finished" podID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerID="c7eed17f8011ffab6eb2dde5e51be682319a657826814b48e7281fbb290b14e4" exitCode=0 Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.460269 4688 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerDied","Data":"c7eed17f8011ffab6eb2dde5e51be682319a657826814b48e7281fbb290b14e4"} Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.462849 4688 generic.go:334] "Generic (PLEG): container finished" podID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerID="ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7" exitCode=0 Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.462892 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerDied","Data":"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7"} Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.551393 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.591598 4688 scope.go:117] "RemoveContainer" containerID="135497b468f7a754e8a4fd47bf8448c2769a1e61ba86b493976da194e1d99baa" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.623118 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities\") pod \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.623326 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content\") pod \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.623445 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnbp7\" (UniqueName: \"kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7\") pod \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\" (UID: \"8e784435-d628-4dbf-9a86-d3bc83c9c5ac\") " Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.624729 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities" (OuterVolumeSpecName: "utilities") pod "8e784435-d628-4dbf-9a86-d3bc83c9c5ac" (UID: "8e784435-d628-4dbf-9a86-d3bc83c9c5ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.637517 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7" (OuterVolumeSpecName: "kube-api-access-dnbp7") pod "8e784435-d628-4dbf-9a86-d3bc83c9c5ac" (UID: "8e784435-d628-4dbf-9a86-d3bc83c9c5ac"). InnerVolumeSpecName "kube-api-access-dnbp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.651906 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e784435-d628-4dbf-9a86-d3bc83c9c5ac" (UID: "8e784435-d628-4dbf-9a86-d3bc83c9c5ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.725800 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.725841 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:58 crc kubenswrapper[4688]: I0123 19:15:58.725853 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnbp7\" (UniqueName: \"kubernetes.io/projected/8e784435-d628-4dbf-9a86-d3bc83c9c5ac-kube-api-access-dnbp7\") on node \"crc\" DevicePath \"\"" Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.487137 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerStarted","Data":"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b"} Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.492406 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d7pf" event={"ID":"8e784435-d628-4dbf-9a86-d3bc83c9c5ac","Type":"ContainerDied","Data":"b88b71480f6da24371858023fb03f93b9ddfbd0c789438ad57918b32c067da8c"} Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.492458 4688 scope.go:117] "RemoveContainer" containerID="c7eed17f8011ffab6eb2dde5e51be682319a657826814b48e7281fbb290b14e4" Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.492525 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d7pf" Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.509595 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvc2h" podStartSLOduration=2.936756661 podStartE2EDuration="7.509574853s" podCreationTimestamp="2026-01-23 19:15:52 +0000 UTC" firstStartedPulling="2026-01-23 19:15:54.399398541 +0000 UTC m=+4149.395222972" lastFinishedPulling="2026-01-23 19:15:58.972216723 +0000 UTC m=+4153.968041164" observedRunningTime="2026-01-23 19:15:59.504978221 +0000 UTC m=+4154.500802672" watchObservedRunningTime="2026-01-23 19:15:59.509574853 +0000 UTC m=+4154.505399294" Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.521064 4688 scope.go:117] "RemoveContainer" containerID="a2d9903c0a9a7e591912a3c3fe7569ce51130d59922d3dedde87d9c5aad38b07" Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.529129 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.538736 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d7pf"] Jan 23 19:15:59 crc kubenswrapper[4688]: I0123 19:15:59.542148 4688 scope.go:117] "RemoveContainer" containerID="9011924ad0f80485a8c6d153772866bfbea386571d7834ce73b81e24461c2d94" Jan 23 19:16:01 crc kubenswrapper[4688]: I0123 19:16:01.357203 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:16:01 crc kubenswrapper[4688]: E0123 19:16:01.358606 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:16:01 crc kubenswrapper[4688]: I0123 19:16:01.367290 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" path="/var/lib/kubelet/pods/8e784435-d628-4dbf-9a86-d3bc83c9c5ac/volumes" Jan 23 19:16:03 crc kubenswrapper[4688]: I0123 19:16:03.236695 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:03 crc kubenswrapper[4688]: I0123 19:16:03.237057 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:04 crc kubenswrapper[4688]: I0123 19:16:04.296246 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvc2h" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="registry-server" probeResult="failure" output=< Jan 23 19:16:04 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:16:04 crc kubenswrapper[4688]: > Jan 23 19:16:13 crc kubenswrapper[4688]: I0123 19:16:13.291319 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:13 crc kubenswrapper[4688]: I0123 19:16:13.343895 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:13 crc kubenswrapper[4688]: I0123 
19:16:13.536111 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:16:14 crc kubenswrapper[4688]: I0123 19:16:14.650429 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvc2h" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="registry-server" containerID="cri-o://d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b" gracePeriod=2 Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.365636 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.387871 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.397612 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities\") pod \"61b13533-d9e1-4e3a-a302-86893ad967cf\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.397667 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92mp\" (UniqueName: \"kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp\") pod \"61b13533-d9e1-4e3a-a302-86893ad967cf\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.397971 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content\") pod \"61b13533-d9e1-4e3a-a302-86893ad967cf\" (UID: \"61b13533-d9e1-4e3a-a302-86893ad967cf\") " Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.398817 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities" (OuterVolumeSpecName: "utilities") pod "61b13533-d9e1-4e3a-a302-86893ad967cf" (UID: "61b13533-d9e1-4e3a-a302-86893ad967cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.399834 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.405703 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp" (OuterVolumeSpecName: "kube-api-access-q92mp") pod "61b13533-d9e1-4e3a-a302-86893ad967cf" (UID: "61b13533-d9e1-4e3a-a302-86893ad967cf"). InnerVolumeSpecName "kube-api-access-q92mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.502502 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92mp\" (UniqueName: \"kubernetes.io/projected/61b13533-d9e1-4e3a-a302-86893ad967cf-kube-api-access-q92mp\") on node \"crc\" DevicePath \"\"" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.550099 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61b13533-d9e1-4e3a-a302-86893ad967cf" (UID: "61b13533-d9e1-4e3a-a302-86893ad967cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.604049 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b13533-d9e1-4e3a-a302-86893ad967cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.664230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d"} Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.676476 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerDied","Data":"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b"} Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.676539 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvc2h" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.677919 4688 generic.go:334] "Generic (PLEG): container finished" podID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerID="d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b" exitCode=0 Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.676552 4688 scope.go:117] "RemoveContainer" containerID="d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.678012 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvc2h" event={"ID":"61b13533-d9e1-4e3a-a302-86893ad967cf","Type":"ContainerDied","Data":"947d057375baa09349b20b71aebd98fcf75706265e3e959b36321fa2f7e43d5f"} Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.742292 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.744941 4688 scope.go:117] "RemoveContainer" containerID="ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.761793 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvc2h"] Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.776591 4688 scope.go:117] "RemoveContainer" containerID="b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.800023 4688 scope.go:117] "RemoveContainer" containerID="d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b" Jan 23 19:16:15 crc kubenswrapper[4688]: E0123 19:16:15.801342 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b\": container with ID starting with d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b not found: ID does not exist" containerID="d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.801400 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b"} err="failed to get container status \"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b\": rpc error: code = NotFound desc = could not find container \"d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b\": container with ID starting with d465f017c62b0d4df40797a20cee4c495aee2fa473727ab01ac38b3cc6328d7b not found: ID does not exist" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.801437 4688 scope.go:117] "RemoveContainer" containerID="ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7" Jan 23 19:16:15 crc kubenswrapper[4688]: E0123 19:16:15.803429 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7\": container with ID starting with ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7 not found: ID does not exist" containerID="ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.803457 4688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7"} err="failed to get container status \"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7\": rpc error: code = NotFound desc = could not find container \"ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7\": container with ID starting with ba110653d03655ba1e064f446af169e706d55584f18f11b03174357a53f519d7 not found: ID does not exist" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.803474 4688 scope.go:117] "RemoveContainer" containerID="b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705" Jan 23 19:16:15 crc kubenswrapper[4688]: E0123 19:16:15.803946 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705\": container with ID starting with b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705 not found: ID does not exist" containerID="b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705" Jan 23 19:16:15 crc kubenswrapper[4688]: I0123 19:16:15.803964 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705"} err="failed to get container status \"b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705\": rpc error: code = NotFound desc = could not find container \"b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705\": container with ID starting with b6d3cc3c599ea73e1ca268f734a047809ca8e6498e6bc61dd4511f4cfdaeb705 not found: ID does not exist" Jan 23 19:16:17 crc kubenswrapper[4688]: I0123 19:16:17.393219 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" path="/var/lib/kubelet/pods/61b13533-d9e1-4e3a-a302-86893ad967cf/volumes" Jan 23 19:18:36 crc kubenswrapper[4688]: I0123 19:18:36.965569 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:18:36 crc kubenswrapper[4688]: I0123 19:18:36.967378 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:19:06 crc kubenswrapper[4688]: I0123 19:19:06.966023 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:19:06 crc kubenswrapper[4688]: I0123 19:19:06.966598 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:19:36 crc kubenswrapper[4688]: I0123 19:19:36.965863 4688 patch_prober.go:28] interesting 
pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:19:36 crc kubenswrapper[4688]: I0123 19:19:36.966510 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:19:36 crc kubenswrapper[4688]: I0123 19:19:36.966571 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:19:36 crc kubenswrapper[4688]: I0123 19:19:36.967479 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:19:36 crc kubenswrapper[4688]: I0123 19:19:36.967543 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d" gracePeriod=600 Jan 23 19:19:37 crc kubenswrapper[4688]: I0123 19:19:37.744381 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d" exitCode=0 Jan 23 19:19:37 crc kubenswrapper[4688]: I0123 19:19:37.744445 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d"} Jan 23 19:19:37 crc kubenswrapper[4688]: I0123 19:19:37.745014 4688 scope.go:117] "RemoveContainer" containerID="dc288d03c0fff902cdc80cea32c74d2fc462968ef734923bce00a664a76ee7d8" Jan 23 19:19:38 crc kubenswrapper[4688]: I0123 19:19:38.769527 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab"} Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.211202 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212382 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="extract-utilities" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212400 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="extract-utilities" Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212424 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" 
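The liveness failures against http://127.0.0.1:8798/health arrive on a fixed period (here every 30s: 19:18:36, 19:19:06, 19:19:36), and only after a run of consecutive failures does the kubelet mark the probe unhealthy and kill the container ("failed liveness probe, will be restarted"). The cadence suggests periodSeconds=30 with failureThreshold=3, but the threshold is inferred from the log, not stated in it. A minimal sketch of that counting logic:

package main

import (
	"fmt"
	"net/http"
	"time"
)

const (
	probeURL         = "http://127.0.0.1:8798/health" // from the log
	periodSeconds    = 30                             // matches the cadence in the log
	failureThreshold = 3                              // assumed; not stated in the log
)

func probe() bool {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(probeURL)
	if err != nil {
		return false // e.g. "connect: connection refused"
	}
	resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	failures := 0
	for i := 0; i < failureThreshold; i++ { // stand-in for the kubelet's periodic probe worker
		if probe() {
			failures = 0
		} else {
			failures++
			fmt.Printf("Probe failed (%d/%d)\n", failures, failureThreshold)
		}
		if failures >= failureThreshold {
			fmt.Println("Container failed liveness probe, will be restarted")
			return
		}
		time.Sleep(periodSeconds * time.Second)
	}
}
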
containerName="extract-content" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212434 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="extract-content" Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212464 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="extract-utilities" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212471 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="extract-utilities" Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212497 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="extract-content" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212508 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="extract-content" Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212526 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212534 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: E0123 19:21:11.212562 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212570 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212812 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b13533-d9e1-4e3a-a302-86893ad967cf" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.212835 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e784435-d628-4dbf-9a86-d3bc83c9c5ac" containerName="registry-server" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.214675 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.228307 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.406934 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.407121 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.407290 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8msm\" (UniqueName: \"kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.508995 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8msm\" (UniqueName: \"kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.509099 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.509241 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.509780 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.510472 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.530908 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c8msm\" (UniqueName: \"kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm\") pod \"community-operators-vrg5d\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:11 crc kubenswrapper[4688]: I0123 19:21:11.573702 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:12 crc kubenswrapper[4688]: I0123 19:21:12.173414 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:12 crc kubenswrapper[4688]: I0123 19:21:12.730317 4688 generic.go:334] "Generic (PLEG): container finished" podID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerID="5181228dbf4c0b4c01d77faa590f45719fb14830064fa8ea6c9caea68e786921" exitCode=0 Jan 23 19:21:12 crc kubenswrapper[4688]: I0123 19:21:12.730479 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerDied","Data":"5181228dbf4c0b4c01d77faa590f45719fb14830064fa8ea6c9caea68e786921"} Jan 23 19:21:12 crc kubenswrapper[4688]: I0123 19:21:12.730597 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerStarted","Data":"1a16345035cf7ba28e9db0092407ff0b2aa694b4fe3e40afcf440bb58e1a7d73"} Jan 23 19:21:12 crc kubenswrapper[4688]: I0123 19:21:12.733446 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:21:13 crc kubenswrapper[4688]: I0123 19:21:13.745160 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerStarted","Data":"82c6fc1055117fc73949215f3f44c34f51033feb16ff4a1827ad321d4af25ac9"} Jan 23 19:21:14 crc kubenswrapper[4688]: I0123 19:21:14.757142 4688 generic.go:334] "Generic (PLEG): container finished" podID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerID="82c6fc1055117fc73949215f3f44c34f51033feb16ff4a1827ad321d4af25ac9" exitCode=0 Jan 23 19:21:14 crc kubenswrapper[4688]: I0123 19:21:14.757230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerDied","Data":"82c6fc1055117fc73949215f3f44c34f51033feb16ff4a1827ad321d4af25ac9"} Jan 23 19:21:15 crc kubenswrapper[4688]: I0123 19:21:15.768945 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerStarted","Data":"bcfcefd9aa5a92bbaf3e1a20c08042976c93f49966bec8868ea3fb57ba5f99d7"} Jan 23 19:21:15 crc kubenswrapper[4688]: I0123 19:21:15.795667 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vrg5d" podStartSLOduration=2.133471626 podStartE2EDuration="4.795643807s" podCreationTimestamp="2026-01-23 19:21:11 +0000 UTC" firstStartedPulling="2026-01-23 19:21:12.73296499 +0000 UTC m=+4467.728789431" lastFinishedPulling="2026-01-23 19:21:15.395137171 +0000 UTC m=+4470.390961612" observedRunningTime="2026-01-23 19:21:15.785110125 +0000 UTC m=+4470.780934566" watchObservedRunningTime="2026-01-23 
19:21:15.795643807 +0000 UTC m=+4470.791468258" Jan 23 19:21:21 crc kubenswrapper[4688]: I0123 19:21:21.574463 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:21 crc kubenswrapper[4688]: I0123 19:21:21.575009 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:21 crc kubenswrapper[4688]: I0123 19:21:21.619753 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:21 crc kubenswrapper[4688]: I0123 19:21:21.884618 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:22 crc kubenswrapper[4688]: I0123 19:21:22.396738 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:23 crc kubenswrapper[4688]: I0123 19:21:23.852102 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vrg5d" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="registry-server" containerID="cri-o://bcfcefd9aa5a92bbaf3e1a20c08042976c93f49966bec8868ea3fb57ba5f99d7" gracePeriod=2 Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.863518 4688 generic.go:334] "Generic (PLEG): container finished" podID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerID="bcfcefd9aa5a92bbaf3e1a20c08042976c93f49966bec8868ea3fb57ba5f99d7" exitCode=0 Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.863606 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerDied","Data":"bcfcefd9aa5a92bbaf3e1a20c08042976c93f49966bec8868ea3fb57ba5f99d7"} Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.863918 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vrg5d" event={"ID":"66f2b002-3175-47cb-9d1e-1173fb507fc4","Type":"ContainerDied","Data":"1a16345035cf7ba28e9db0092407ff0b2aa694b4fe3e40afcf440bb58e1a7d73"} Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.863936 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a16345035cf7ba28e9db0092407ff0b2aa694b4fe3e40afcf440bb58e1a7d73" Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.886336 4688 util.go:48] "No ready sandbox for pod can be found. 
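"Killing container with a grace period" means the runtime first delivers SIGTERM and escalates to SIGKILL only if the process is still alive when the grace period expires (2s for these catalog pods, 600s for machine-config-daemon). A Unix-only sketch of that escalation, using a sleep child process as a stand-in for the container:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	const gracePeriod = 2 * time.Second // gracePeriod=2 in the log

	cmd := exec.Command("sleep", "30") // stand-in for the container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Ask nicely first, as the runtime does on a graceful kill.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(gracePeriod):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		<-done
		fmt.Println("grace period expired, killed")
	}
}
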
Need to start a new one" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.910092 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content\") pod \"66f2b002-3175-47cb-9d1e-1173fb507fc4\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.912553 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities\") pod \"66f2b002-3175-47cb-9d1e-1173fb507fc4\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.912786 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8msm\" (UniqueName: \"kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm\") pod \"66f2b002-3175-47cb-9d1e-1173fb507fc4\" (UID: \"66f2b002-3175-47cb-9d1e-1173fb507fc4\") " Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.913633 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities" (OuterVolumeSpecName: "utilities") pod "66f2b002-3175-47cb-9d1e-1173fb507fc4" (UID: "66f2b002-3175-47cb-9d1e-1173fb507fc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.919626 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm" (OuterVolumeSpecName: "kube-api-access-c8msm") pod "66f2b002-3175-47cb-9d1e-1173fb507fc4" (UID: "66f2b002-3175-47cb-9d1e-1173fb507fc4"). InnerVolumeSpecName "kube-api-access-c8msm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:21:24 crc kubenswrapper[4688]: I0123 19:21:24.966793 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66f2b002-3175-47cb-9d1e-1173fb507fc4" (UID: "66f2b002-3175-47cb-9d1e-1173fb507fc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.016226 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.016267 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8msm\" (UniqueName: \"kubernetes.io/projected/66f2b002-3175-47cb-9d1e-1173fb507fc4-kube-api-access-c8msm\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.016277 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66f2b002-3175-47cb-9d1e-1173fb507fc4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.887942 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vrg5d" Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.915400 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:25 crc kubenswrapper[4688]: I0123 19:21:25.924437 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vrg5d"] Jan 23 19:21:27 crc kubenswrapper[4688]: I0123 19:21:27.377495 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" path="/var/lib/kubelet/pods/66f2b002-3175-47cb-9d1e-1173fb507fc4/volumes" Jan 23 19:22:06 crc kubenswrapper[4688]: I0123 19:22:06.965506 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:22:06 crc kubenswrapper[4688]: I0123 19:22:06.966063 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:22:36 crc kubenswrapper[4688]: I0123 19:22:36.965238 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:22:36 crc kubenswrapper[4688]: I0123 19:22:36.965922 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:23:06 crc kubenswrapper[4688]: I0123 19:23:06.965401 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:23:06 crc kubenswrapper[4688]: I0123 19:23:06.965931 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:23:06 crc kubenswrapper[4688]: I0123 19:23:06.965988 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:23:06 crc kubenswrapper[4688]: I0123 19:23:06.966937 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 23 19:23:06 crc kubenswrapper[4688]: I0123 19:23:06.967009 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" gracePeriod=600 Jan 23 19:23:07 crc kubenswrapper[4688]: E0123 19:23:07.109756 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:23:07 crc kubenswrapper[4688]: I0123 19:23:07.313746 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" exitCode=0 Jan 23 19:23:07 crc kubenswrapper[4688]: I0123 19:23:07.313783 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab"} Jan 23 19:23:07 crc kubenswrapper[4688]: I0123 19:23:07.313843 4688 scope.go:117] "RemoveContainer" containerID="66d14a14997f9b3a62d06bbb1755f44068e6ee26f77e02b6d0e1a36f44eba21d" Jan 23 19:23:07 crc kubenswrapper[4688]: I0123 19:23:07.314511 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:23:07 crc kubenswrapper[4688]: E0123 19:23:07.314782 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:23:20 crc kubenswrapper[4688]: I0123 19:23:20.357121 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:23:20 crc kubenswrapper[4688]: E0123 19:23:20.358089 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:23:34 crc kubenswrapper[4688]: I0123 19:23:34.356331 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:23:34 crc kubenswrapper[4688]: E0123 19:23:34.357015 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:23:46 crc kubenswrapper[4688]: I0123 19:23:46.356808 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:23:46 crc kubenswrapper[4688]: E0123 19:23:46.357738 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:23:58 crc kubenswrapper[4688]: I0123 19:23:58.356615 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:23:58 crc kubenswrapper[4688]: E0123 19:23:58.358326 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:24:09 crc kubenswrapper[4688]: I0123 19:24:09.356891 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:24:09 crc kubenswrapper[4688]: E0123 19:24:09.357614 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:24:22 crc kubenswrapper[4688]: I0123 19:24:22.356281 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:24:22 crc kubenswrapper[4688]: E0123 19:24:22.357645 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:24:36 crc kubenswrapper[4688]: I0123 19:24:36.356817 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:24:36 crc kubenswrapper[4688]: E0123 19:24:36.357832 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.382733 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:38 crc kubenswrapper[4688]: E0123 19:24:38.383570 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="extract-utilities" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.383590 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="extract-utilities" Jan 23 19:24:38 crc kubenswrapper[4688]: E0123 19:24:38.383605 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="registry-server" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.383614 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="registry-server" Jan 23 19:24:38 crc kubenswrapper[4688]: E0123 19:24:38.383636 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="extract-content" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.383643 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="extract-content" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.383947 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f2b002-3175-47cb-9d1e-1173fb507fc4" containerName="registry-server" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.385823 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.398720 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.585971 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.586027 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wf8v\" (UniqueName: \"kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.586414 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.689084 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content\") pod \"certified-operators-bcbv2\" (UID: 
\"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.689154 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wf8v\" (UniqueName: \"kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.689297 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.689720 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:38 crc kubenswrapper[4688]: I0123 19:24:38.689772 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:39 crc kubenswrapper[4688]: I0123 19:24:39.170808 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wf8v\" (UniqueName: \"kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v\") pod \"certified-operators-bcbv2\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:39 crc kubenswrapper[4688]: I0123 19:24:39.306590 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:39 crc kubenswrapper[4688]: I0123 19:24:39.810010 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:40 crc kubenswrapper[4688]: I0123 19:24:40.219301 4688 generic.go:334] "Generic (PLEG): container finished" podID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerID="6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6" exitCode=0 Jan 23 19:24:40 crc kubenswrapper[4688]: I0123 19:24:40.219516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerDied","Data":"6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6"} Jan 23 19:24:40 crc kubenswrapper[4688]: I0123 19:24:40.219664 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerStarted","Data":"6d5bd3b3d5611270e4f95938c879b2c5984a8139be1b7340e8dd4a7a1d8eecde"} Jan 23 19:24:42 crc kubenswrapper[4688]: I0123 19:24:42.242098 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerStarted","Data":"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66"} Jan 23 19:24:43 crc kubenswrapper[4688]: I0123 19:24:43.256536 4688 generic.go:334] "Generic (PLEG): container finished" podID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerID="e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66" exitCode=0 Jan 23 19:24:43 crc kubenswrapper[4688]: I0123 19:24:43.256603 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerDied","Data":"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66"} Jan 23 19:24:44 crc kubenswrapper[4688]: I0123 19:24:44.267380 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerStarted","Data":"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3"} Jan 23 19:24:44 crc kubenswrapper[4688]: I0123 19:24:44.283071 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bcbv2" podStartSLOduration=2.6498266790000002 podStartE2EDuration="6.283051292s" podCreationTimestamp="2026-01-23 19:24:38 +0000 UTC" firstStartedPulling="2026-01-23 19:24:40.22143099 +0000 UTC m=+4675.217255431" lastFinishedPulling="2026-01-23 19:24:43.854655603 +0000 UTC m=+4678.850480044" observedRunningTime="2026-01-23 19:24:44.282050783 +0000 UTC m=+4679.277875214" watchObservedRunningTime="2026-01-23 19:24:44.283051292 +0000 UTC m=+4679.278875733" Jan 23 19:24:48 crc kubenswrapper[4688]: I0123 19:24:48.357049 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:24:48 crc kubenswrapper[4688]: E0123 19:24:48.358140 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:24:49 crc kubenswrapper[4688]: I0123 19:24:49.307076 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:49 crc kubenswrapper[4688]: I0123 19:24:49.307439 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:49 crc kubenswrapper[4688]: I0123 19:24:49.812899 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:49 crc kubenswrapper[4688]: I0123 19:24:49.871116 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:50 crc kubenswrapper[4688]: I0123 19:24:50.051082 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:51 crc kubenswrapper[4688]: I0123 19:24:51.343808 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bcbv2" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="registry-server" containerID="cri-o://0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3" gracePeriod=2 Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.262397 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.353120 4688 generic.go:334] "Generic (PLEG): container finished" podID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerID="0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3" exitCode=0 Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.353170 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerDied","Data":"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3"} Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.353181 4688 util.go:48] "No ready sandbox for pod can be found. 
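Two distinct sandbox messages recur through this section: "No sandbox for pod can be found. Need to start a new one" (util.go:30) appears during pod creation, when no sandbox exists yet, while "No ready sandbox for pod can be found. Need to start a new one" (util.go:48) appears during teardown and resync, when sandboxes exist but none is ready. Both feed the same decision: the next sync would have to create a fresh sandbox. A compact sketch of that decision, with illustrative types:

package main

import "fmt"

type sandbox struct {
	id    string
	ready bool
}

// needNewSandbox mirrors the two log messages: no sandboxes at all, or
// sandboxes present but none of them ready.
func needNewSandbox(sandboxes []sandbox) (bool, string) {
	if len(sandboxes) == 0 {
		return true, "No sandbox for pod can be found. Need to start a new one"
	}
	for _, s := range sandboxes {
		if s.ready {
			return false, ""
		}
	}
	return true, "No ready sandbox for pod can be found. Need to start a new one"
}

func main() {
	fmt.Println(needNewSandbox(nil))
	fmt.Println(needNewSandbox([]sandbox{{"6d5bd3b3", false}}))
	fmt.Println(needNewSandbox([]sandbox{{"6d5bd3b3", true}}))
}
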
Need to start a new one" pod="openshift-marketplace/certified-operators-bcbv2" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.353230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcbv2" event={"ID":"676a3693-965b-4dfa-ae59-b7d020bbbbf4","Type":"ContainerDied","Data":"6d5bd3b3d5611270e4f95938c879b2c5984a8139be1b7340e8dd4a7a1d8eecde"} Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.353256 4688 scope.go:117] "RemoveContainer" containerID="0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.372117 4688 scope.go:117] "RemoveContainer" containerID="e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.394959 4688 scope.go:117] "RemoveContainer" containerID="6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.395243 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wf8v\" (UniqueName: \"kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v\") pod \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.395298 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content\") pod \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.395408 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities\") pod \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\" (UID: \"676a3693-965b-4dfa-ae59-b7d020bbbbf4\") " Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.396294 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities" (OuterVolumeSpecName: "utilities") pod "676a3693-965b-4dfa-ae59-b7d020bbbbf4" (UID: "676a3693-965b-4dfa-ae59-b7d020bbbbf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.397722 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.403943 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v" (OuterVolumeSpecName: "kube-api-access-8wf8v") pod "676a3693-965b-4dfa-ae59-b7d020bbbbf4" (UID: "676a3693-965b-4dfa-ae59-b7d020bbbbf4"). InnerVolumeSpecName "kube-api-access-8wf8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.441303 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "676a3693-965b-4dfa-ae59-b7d020bbbbf4" (UID: "676a3693-965b-4dfa-ae59-b7d020bbbbf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.504659 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wf8v\" (UniqueName: \"kubernetes.io/projected/676a3693-965b-4dfa-ae59-b7d020bbbbf4-kube-api-access-8wf8v\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.504692 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676a3693-965b-4dfa-ae59-b7d020bbbbf4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.506793 4688 scope.go:117] "RemoveContainer" containerID="0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3" Jan 23 19:24:52 crc kubenswrapper[4688]: E0123 19:24:52.507229 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3\": container with ID starting with 0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3 not found: ID does not exist" containerID="0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.507295 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3"} err="failed to get container status \"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3\": rpc error: code = NotFound desc = could not find container \"0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3\": container with ID starting with 0076e3926fbed7ce598850b052124a809feecd3fc8aceb599d4666db429bc9d3 not found: ID does not exist" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.507333 4688 scope.go:117] "RemoveContainer" containerID="e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66" Jan 23 19:24:52 crc kubenswrapper[4688]: E0123 19:24:52.507638 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66\": container with ID starting with e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66 not found: ID does not exist" containerID="e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.507669 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66"} err="failed to get container status \"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66\": rpc error: code = NotFound desc = could not find container \"e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66\": container with ID starting with e23cf12f25a67886331edc747a12f4ab53afe2cdce71bcf435a1cf89eb952b66 not found: ID does not exist" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.507690 4688 scope.go:117] "RemoveContainer" containerID="6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6" Jan 23 19:24:52 crc kubenswrapper[4688]: E0123 19:24:52.507890 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6\": container with ID starting with 
6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6 not found: ID does not exist" containerID="6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.507908 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6"} err="failed to get container status \"6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6\": rpc error: code = NotFound desc = could not find container \"6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6\": container with ID starting with 6af570a38693fe642f672eef1c70b1b6a2f73474d8f32b9b96e83e999bb259d6 not found: ID does not exist" Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.689457 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:52 crc kubenswrapper[4688]: I0123 19:24:52.698519 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bcbv2"] Jan 23 19:24:53 crc kubenswrapper[4688]: I0123 19:24:53.370340 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" path="/var/lib/kubelet/pods/676a3693-965b-4dfa-ae59-b7d020bbbbf4/volumes" Jan 23 19:24:59 crc kubenswrapper[4688]: I0123 19:24:59.357116 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:24:59 crc kubenswrapper[4688]: E0123 19:24:59.357769 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:25:12 crc kubenswrapper[4688]: I0123 19:25:12.355948 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:25:12 crc kubenswrapper[4688]: E0123 19:25:12.356733 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:25:27 crc kubenswrapper[4688]: I0123 19:25:27.357080 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:25:27 crc kubenswrapper[4688]: E0123 19:25:27.357932 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:25:42 crc kubenswrapper[4688]: I0123 19:25:42.356761 4688 scope.go:117] "RemoveContainer" 
containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:25:42 crc kubenswrapper[4688]: E0123 19:25:42.357711 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.878997 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:25:45 crc kubenswrapper[4688]: E0123 19:25:45.880352 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="registry-server" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.880373 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="registry-server" Jan 23 19:25:45 crc kubenswrapper[4688]: E0123 19:25:45.880389 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="extract-utilities" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.880427 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="extract-utilities" Jan 23 19:25:45 crc kubenswrapper[4688]: E0123 19:25:45.880492 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="extract-content" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.880505 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="extract-content" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.880798 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="676a3693-965b-4dfa-ae59-b7d020bbbbf4" containerName="registry-server" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.882982 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.899416 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.998174 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.998249 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfl4g\" (UniqueName: \"kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:45 crc kubenswrapper[4688]: I0123 19:25:45.998277 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.100146 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.100212 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfl4g\" (UniqueName: \"kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.100230 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.100674 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.100715 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.121840 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dfl4g\" (UniqueName: \"kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g\") pod \"redhat-marketplace-m4vqf\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.212333 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.667335 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.887339 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerStarted","Data":"6ab7723161ac8a59ead228100be9174eee83210d6741ef8d79b2392eb3ae310c"} Jan 23 19:25:46 crc kubenswrapper[4688]: I0123 19:25:46.887401 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerStarted","Data":"0ff7d21a179f2050778a555b9c9724a5d200bea36b7beb3cb235f5c4a62dc0f7"} Jan 23 19:25:47 crc kubenswrapper[4688]: I0123 19:25:47.897692 4688 generic.go:334] "Generic (PLEG): container finished" podID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerID="6ab7723161ac8a59ead228100be9174eee83210d6741ef8d79b2392eb3ae310c" exitCode=0 Jan 23 19:25:47 crc kubenswrapper[4688]: I0123 19:25:47.897749 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerDied","Data":"6ab7723161ac8a59ead228100be9174eee83210d6741ef8d79b2392eb3ae310c"} Jan 23 19:25:48 crc kubenswrapper[4688]: I0123 19:25:48.906440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerStarted","Data":"dda96d44259b4d3b82699322d369f39dd147509e0cea536fb2c541c5fbb972c0"} Jan 23 19:25:49 crc kubenswrapper[4688]: I0123 19:25:49.925620 4688 generic.go:334] "Generic (PLEG): container finished" podID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerID="dda96d44259b4d3b82699322d369f39dd147509e0cea536fb2c541c5fbb972c0" exitCode=0 Jan 23 19:25:49 crc kubenswrapper[4688]: I0123 19:25:49.925694 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerDied","Data":"dda96d44259b4d3b82699322d369f39dd147509e0cea536fb2c541c5fbb972c0"} Jan 23 19:25:53 crc kubenswrapper[4688]: I0123 19:25:53.963997 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerStarted","Data":"a1cf8fc5f0618afd4ab817b162781c98493154027c06833c4683415bd701efce"} Jan 23 19:25:53 crc kubenswrapper[4688]: I0123 19:25:53.987674 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4vqf" podStartSLOduration=4.001005654 podStartE2EDuration="8.987655357s" podCreationTimestamp="2026-01-23 19:25:45 +0000 UTC" firstStartedPulling="2026-01-23 19:25:47.900267329 +0000 UTC m=+4742.896091770" lastFinishedPulling="2026-01-23 19:25:52.886917032 +0000 UTC 
m=+4747.882741473" observedRunningTime="2026-01-23 19:25:53.9842781 +0000 UTC m=+4748.980102561" watchObservedRunningTime="2026-01-23 19:25:53.987655357 +0000 UTC m=+4748.983479798" Jan 23 19:25:56 crc kubenswrapper[4688]: I0123 19:25:56.212877 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:56 crc kubenswrapper[4688]: I0123 19:25:56.214008 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:56 crc kubenswrapper[4688]: I0123 19:25:56.269120 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:56 crc kubenswrapper[4688]: I0123 19:25:56.356239 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:25:56 crc kubenswrapper[4688]: E0123 19:25:56.356552 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:25:58 crc kubenswrapper[4688]: I0123 19:25:58.047396 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:25:59 crc kubenswrapper[4688]: I0123 19:25:59.807640 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:26:00 crc kubenswrapper[4688]: I0123 19:26:00.019275 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4vqf" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="registry-server" containerID="cri-o://a1cf8fc5f0618afd4ab817b162781c98493154027c06833c4683415bd701efce" gracePeriod=2 Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.031331 4688 generic.go:334] "Generic (PLEG): container finished" podID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerID="a1cf8fc5f0618afd4ab817b162781c98493154027c06833c4683415bd701efce" exitCode=0 Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.031366 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerDied","Data":"a1cf8fc5f0618afd4ab817b162781c98493154027c06833c4683415bd701efce"} Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.308023 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.388832 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities\") pod \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.388986 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content\") pod \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.389115 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfl4g\" (UniqueName: \"kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g\") pod \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\" (UID: \"20ebf883-c0b7-4b6b-8a70-b764037f36f4\") " Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.390446 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities" (OuterVolumeSpecName: "utilities") pod "20ebf883-c0b7-4b6b-8a70-b764037f36f4" (UID: "20ebf883-c0b7-4b6b-8a70-b764037f36f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.398538 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.401259 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g" (OuterVolumeSpecName: "kube-api-access-dfl4g") pod "20ebf883-c0b7-4b6b-8a70-b764037f36f4" (UID: "20ebf883-c0b7-4b6b-8a70-b764037f36f4"). InnerVolumeSpecName "kube-api-access-dfl4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.419768 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20ebf883-c0b7-4b6b-8a70-b764037f36f4" (UID: "20ebf883-c0b7-4b6b-8a70-b764037f36f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.500856 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20ebf883-c0b7-4b6b-8a70-b764037f36f4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:26:01 crc kubenswrapper[4688]: I0123 19:26:01.501076 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfl4g\" (UniqueName: \"kubernetes.io/projected/20ebf883-c0b7-4b6b-8a70-b764037f36f4-kube-api-access-dfl4g\") on node \"crc\" DevicePath \"\"" Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.042449 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4vqf" event={"ID":"20ebf883-c0b7-4b6b-8a70-b764037f36f4","Type":"ContainerDied","Data":"0ff7d21a179f2050778a555b9c9724a5d200bea36b7beb3cb235f5c4a62dc0f7"} Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.042509 4688 scope.go:117] "RemoveContainer" containerID="a1cf8fc5f0618afd4ab817b162781c98493154027c06833c4683415bd701efce" Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.042528 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4vqf" Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.086438 4688 scope.go:117] "RemoveContainer" containerID="dda96d44259b4d3b82699322d369f39dd147509e0cea536fb2c541c5fbb972c0" Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.103856 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.112362 4688 scope.go:117] "RemoveContainer" containerID="6ab7723161ac8a59ead228100be9174eee83210d6741ef8d79b2392eb3ae310c" Jan 23 19:26:02 crc kubenswrapper[4688]: I0123 19:26:02.118912 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4vqf"] Jan 23 19:26:03 crc kubenswrapper[4688]: I0123 19:26:03.368348 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" path="/var/lib/kubelet/pods/20ebf883-c0b7-4b6b-8a70-b764037f36f4/volumes" Jan 23 19:26:10 crc kubenswrapper[4688]: I0123 19:26:10.356293 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:26:10 crc kubenswrapper[4688]: E0123 19:26:10.358491 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:26:21 crc kubenswrapper[4688]: I0123 19:26:21.361594 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:26:21 crc kubenswrapper[4688]: E0123 19:26:21.362381 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:26:33 crc kubenswrapper[4688]: I0123 19:26:33.356605 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:26:33 crc kubenswrapper[4688]: E0123 19:26:33.357406 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:26:45 crc kubenswrapper[4688]: I0123 19:26:45.357480 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:26:45 crc kubenswrapper[4688]: E0123 19:26:45.358107 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:26:57 crc kubenswrapper[4688]: I0123 19:26:57.357401 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:26:57 crc kubenswrapper[4688]: E0123 19:26:57.358333 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:27:10 crc kubenswrapper[4688]: I0123 19:27:10.356942 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:27:10 crc kubenswrapper[4688]: E0123 19:27:10.357667 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:27:24 crc kubenswrapper[4688]: I0123 19:27:24.356335 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:27:24 crc kubenswrapper[4688]: E0123 19:27:24.357055 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:27:36 crc kubenswrapper[4688]: I0123 19:27:36.356780 4688 scope.go:117] "RemoveContainer" 
containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:27:36 crc kubenswrapper[4688]: E0123 19:27:36.357572 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:27:48 crc kubenswrapper[4688]: I0123 19:27:48.357979 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:27:48 crc kubenswrapper[4688]: E0123 19:27:48.360330 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:27:58 crc kubenswrapper[4688]: I0123 19:27:58.933324 4688 scope.go:117] "RemoveContainer" containerID="5181228dbf4c0b4c01d77faa590f45719fb14830064fa8ea6c9caea68e786921" Jan 23 19:27:58 crc kubenswrapper[4688]: I0123 19:27:58.959520 4688 scope.go:117] "RemoveContainer" containerID="bcfcefd9aa5a92bbaf3e1a20c08042976c93f49966bec8868ea3fb57ba5f99d7" Jan 23 19:27:59 crc kubenswrapper[4688]: I0123 19:27:59.018638 4688 scope.go:117] "RemoveContainer" containerID="82c6fc1055117fc73949215f3f44c34f51033feb16ff4a1827ad321d4af25ac9" Jan 23 19:28:02 crc kubenswrapper[4688]: I0123 19:28:02.357434 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:28:02 crc kubenswrapper[4688]: E0123 19:28:02.358427 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:28:15 crc kubenswrapper[4688]: I0123 19:28:15.364011 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:28:16 crc kubenswrapper[4688]: I0123 19:28:16.363899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7"} Jan 23 19:29:35 crc kubenswrapper[4688]: E0123 19:29:35.209894 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.213:44110->38.129.56.213:41963: write tcp 38.129.56.213:44110->38.129.56.213:41963: write: broken pipe Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.321100 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx"] Jan 23 19:30:00 crc kubenswrapper[4688]: E0123 19:30:00.322197 4688 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="extract-content" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.322215 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="extract-content" Jan 23 19:30:00 crc kubenswrapper[4688]: E0123 19:30:00.322239 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="registry-server" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.322247 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="registry-server" Jan 23 19:30:00 crc kubenswrapper[4688]: E0123 19:30:00.322286 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="extract-utilities" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.322294 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="extract-utilities" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.322569 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ebf883-c0b7-4b6b-8a70-b764037f36f4" containerName="registry-server" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.343380 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx"] Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.343500 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.348269 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.348322 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.387521 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b9z\" (UniqueName: \"kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.387699 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.387838 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.489245 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.489396 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.489514 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b9z\" (UniqueName: \"kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.490429 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.768952 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b9z\" (UniqueName: \"kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.780284 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume\") pod \"collect-profiles-29486610-h7pfx\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:00 crc kubenswrapper[4688]: I0123 19:30:00.974381 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:01 crc kubenswrapper[4688]: I0123 19:30:01.442091 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx"] Jan 23 19:30:02 crc kubenswrapper[4688]: I0123 19:30:02.358684 4688 generic.go:334] "Generic (PLEG): container finished" podID="e17e4c86-d688-41f2-8fed-5f38af6048ff" containerID="a8bd029ca639506226d5b0b19341ffc77e72d8bd38ad1678d9d1a3bade7d8cc2" exitCode=0 Jan 23 19:30:02 crc kubenswrapper[4688]: I0123 19:30:02.358747 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" event={"ID":"e17e4c86-d688-41f2-8fed-5f38af6048ff","Type":"ContainerDied","Data":"a8bd029ca639506226d5b0b19341ffc77e72d8bd38ad1678d9d1a3bade7d8cc2"} Jan 23 19:30:02 crc kubenswrapper[4688]: I0123 19:30:02.358995 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" event={"ID":"e17e4c86-d688-41f2-8fed-5f38af6048ff","Type":"ContainerStarted","Data":"090c748d6ae801234333435df4b41f066dd8ff0934c978e87bfa41070326ecc4"} Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.822456 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.962704 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b9z\" (UniqueName: \"kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z\") pod \"e17e4c86-d688-41f2-8fed-5f38af6048ff\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.962809 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume\") pod \"e17e4c86-d688-41f2-8fed-5f38af6048ff\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.962886 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume\") pod \"e17e4c86-d688-41f2-8fed-5f38af6048ff\" (UID: \"e17e4c86-d688-41f2-8fed-5f38af6048ff\") " Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.963469 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume" (OuterVolumeSpecName: "config-volume") pod "e17e4c86-d688-41f2-8fed-5f38af6048ff" (UID: "e17e4c86-d688-41f2-8fed-5f38af6048ff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.970579 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z" (OuterVolumeSpecName: "kube-api-access-z9b9z") pod "e17e4c86-d688-41f2-8fed-5f38af6048ff" (UID: "e17e4c86-d688-41f2-8fed-5f38af6048ff"). InnerVolumeSpecName "kube-api-access-z9b9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:30:03 crc kubenswrapper[4688]: I0123 19:30:03.970765 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e17e4c86-d688-41f2-8fed-5f38af6048ff" (UID: "e17e4c86-d688-41f2-8fed-5f38af6048ff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.064843 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e17e4c86-d688-41f2-8fed-5f38af6048ff-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.065129 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e17e4c86-d688-41f2-8fed-5f38af6048ff-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.065143 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b9z\" (UniqueName: \"kubernetes.io/projected/e17e4c86-d688-41f2-8fed-5f38af6048ff-kube-api-access-z9b9z\") on node \"crc\" DevicePath \"\"" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.380213 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" event={"ID":"e17e4c86-d688-41f2-8fed-5f38af6048ff","Type":"ContainerDied","Data":"090c748d6ae801234333435df4b41f066dd8ff0934c978e87bfa41070326ecc4"} Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.380267 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090c748d6ae801234333435df4b41f066dd8ff0934c978e87bfa41070326ecc4" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.380520 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486610-h7pfx" Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.903839 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279"] Jan 23 19:30:04 crc kubenswrapper[4688]: I0123 19:30:04.914911 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486565-sf279"] Jan 23 19:30:05 crc kubenswrapper[4688]: I0123 19:30:05.377690 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfe9b0f-a662-4411-89cb-a14697aceaab" path="/var/lib/kubelet/pods/fdfe9b0f-a662-4411-89cb-a14697aceaab/volumes" Jan 23 19:30:36 crc kubenswrapper[4688]: I0123 19:30:36.965468 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:30:36 crc kubenswrapper[4688]: I0123 19:30:36.966101 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:30:59 crc kubenswrapper[4688]: I0123 19:30:59.130582 4688 scope.go:117] "RemoveContainer" containerID="bbba60f964453f0ff8906e9f1cae116efb16ed5965d6d3e9c3b139d48d22a113" Jan 23 19:31:06 crc kubenswrapper[4688]: I0123 19:31:06.965343 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:31:06 crc kubenswrapper[4688]: I0123 19:31:06.966037 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.522272 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:12 crc kubenswrapper[4688]: E0123 19:31:12.523384 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e17e4c86-d688-41f2-8fed-5f38af6048ff" containerName="collect-profiles" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.523407 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17e4c86-d688-41f2-8fed-5f38af6048ff" containerName="collect-profiles" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.523701 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e17e4c86-d688-41f2-8fed-5f38af6048ff" containerName="collect-profiles" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.525590 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.535402 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.588063 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.588205 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.588246 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2b4\" (UniqueName: \"kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.691171 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.691269 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.691296 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2b4\" (UniqueName: \"kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.691892 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.691966 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.718281 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7s2b4\" (UniqueName: \"kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4\") pod \"community-operators-wvxrn\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:12 crc kubenswrapper[4688]: I0123 19:31:12.851319 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:13 crc kubenswrapper[4688]: I0123 19:31:13.429911 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:14 crc kubenswrapper[4688]: I0123 19:31:14.146568 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerID="1aff89544d58992f77d27916ddc669635f71aaf013dc15de982b83bf385f0c5a" exitCode=0 Jan 23 19:31:14 crc kubenswrapper[4688]: I0123 19:31:14.146644 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerDied","Data":"1aff89544d58992f77d27916ddc669635f71aaf013dc15de982b83bf385f0c5a"} Jan 23 19:31:14 crc kubenswrapper[4688]: I0123 19:31:14.146899 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerStarted","Data":"4995464e451c42983ce6f86839160970b9fb6dca125ec341ad08508646cbd528"} Jan 23 19:31:14 crc kubenswrapper[4688]: I0123 19:31:14.149586 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:31:15 crc kubenswrapper[4688]: I0123 19:31:15.157532 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerStarted","Data":"4d98323fbbd3c7551dacaa5caa2890723afbcaca084b1a75c3c16c3b0ad080db"} Jan 23 19:31:16 crc kubenswrapper[4688]: I0123 19:31:16.169114 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerID="4d98323fbbd3c7551dacaa5caa2890723afbcaca084b1a75c3c16c3b0ad080db" exitCode=0 Jan 23 19:31:16 crc kubenswrapper[4688]: I0123 19:31:16.169239 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerDied","Data":"4d98323fbbd3c7551dacaa5caa2890723afbcaca084b1a75c3c16c3b0ad080db"} Jan 23 19:31:18 crc kubenswrapper[4688]: I0123 19:31:18.193440 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerStarted","Data":"23c3d7c5f1baf32c023f2d18c2b38bef39d61b4d3ddaa645c30c5947796cd684"} Jan 23 19:31:18 crc kubenswrapper[4688]: I0123 19:31:18.220924 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvxrn" podStartSLOduration=2.863636303 podStartE2EDuration="6.220895637s" podCreationTimestamp="2026-01-23 19:31:12 +0000 UTC" firstStartedPulling="2026-01-23 19:31:14.149160055 +0000 UTC m=+5069.144984506" lastFinishedPulling="2026-01-23 19:31:17.506419399 +0000 UTC m=+5072.502243840" observedRunningTime="2026-01-23 19:31:18.212553017 +0000 UTC m=+5073.208377478" watchObservedRunningTime="2026-01-23 
19:31:18.220895637 +0000 UTC m=+5073.216720078" Jan 23 19:31:22 crc kubenswrapper[4688]: I0123 19:31:22.851736 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:22 crc kubenswrapper[4688]: I0123 19:31:22.852349 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:22 crc kubenswrapper[4688]: I0123 19:31:22.913249 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:23 crc kubenswrapper[4688]: I0123 19:31:23.277017 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:23 crc kubenswrapper[4688]: I0123 19:31:23.326695 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:25 crc kubenswrapper[4688]: I0123 19:31:25.252875 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvxrn" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="registry-server" containerID="cri-o://23c3d7c5f1baf32c023f2d18c2b38bef39d61b4d3ddaa645c30c5947796cd684" gracePeriod=2 Jan 23 19:31:26 crc kubenswrapper[4688]: I0123 19:31:26.262359 4688 generic.go:334] "Generic (PLEG): container finished" podID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerID="23c3d7c5f1baf32c023f2d18c2b38bef39d61b4d3ddaa645c30c5947796cd684" exitCode=0 Jan 23 19:31:26 crc kubenswrapper[4688]: I0123 19:31:26.262411 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerDied","Data":"23c3d7c5f1baf32c023f2d18c2b38bef39d61b4d3ddaa645c30c5947796cd684"} Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.192936 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content\") pod \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280840 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvxrn" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280856 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2b4\" (UniqueName: \"kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4\") pod \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280766 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvxrn" event={"ID":"d1275eb1-a9e3-4443-8fa3-5e501ccd3021","Type":"ContainerDied","Data":"4995464e451c42983ce6f86839160970b9fb6dca125ec341ad08508646cbd528"} Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280948 4688 scope.go:117] "RemoveContainer" containerID="23c3d7c5f1baf32c023f2d18c2b38bef39d61b4d3ddaa645c30c5947796cd684" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.280976 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities\") pod \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\" (UID: \"d1275eb1-a9e3-4443-8fa3-5e501ccd3021\") " Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.282546 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities" (OuterVolumeSpecName: "utilities") pod "d1275eb1-a9e3-4443-8fa3-5e501ccd3021" (UID: "d1275eb1-a9e3-4443-8fa3-5e501ccd3021"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.297172 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4" (OuterVolumeSpecName: "kube-api-access-7s2b4") pod "d1275eb1-a9e3-4443-8fa3-5e501ccd3021" (UID: "d1275eb1-a9e3-4443-8fa3-5e501ccd3021"). InnerVolumeSpecName "kube-api-access-7s2b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.340794 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1275eb1-a9e3-4443-8fa3-5e501ccd3021" (UID: "d1275eb1-a9e3-4443-8fa3-5e501ccd3021"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.353898 4688 scope.go:117] "RemoveContainer" containerID="4d98323fbbd3c7551dacaa5caa2890723afbcaca084b1a75c3c16c3b0ad080db" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.383642 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.383931 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2b4\" (UniqueName: \"kubernetes.io/projected/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-kube-api-access-7s2b4\") on node \"crc\" DevicePath \"\"" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.383945 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1275eb1-a9e3-4443-8fa3-5e501ccd3021-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.389931 4688 scope.go:117] "RemoveContainer" containerID="1aff89544d58992f77d27916ddc669635f71aaf013dc15de982b83bf385f0c5a" Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.609350 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:27 crc kubenswrapper[4688]: I0123 19:31:27.617593 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvxrn"] Jan 23 19:31:29 crc kubenswrapper[4688]: I0123 19:31:29.384459 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" path="/var/lib/kubelet/pods/d1275eb1-a9e3-4443-8fa3-5e501ccd3021/volumes" Jan 23 19:31:36 crc kubenswrapper[4688]: I0123 19:31:36.965582 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:31:36 crc kubenswrapper[4688]: I0123 19:31:36.967165 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:31:36 crc kubenswrapper[4688]: I0123 19:31:36.967339 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:31:36 crc kubenswrapper[4688]: I0123 19:31:36.968270 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:31:36 crc kubenswrapper[4688]: I0123 19:31:36.968416 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" 
containerID="cri-o://bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7" gracePeriod=600 Jan 23 19:31:37 crc kubenswrapper[4688]: I0123 19:31:37.383408 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7" exitCode=0 Jan 23 19:31:37 crc kubenswrapper[4688]: I0123 19:31:37.383472 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7"} Jan 23 19:31:37 crc kubenswrapper[4688]: I0123 19:31:37.383741 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"} Jan 23 19:31:37 crc kubenswrapper[4688]: I0123 19:31:37.383770 4688 scope.go:117] "RemoveContainer" containerID="40887347a6bc110fb246a7dd861161535a84e4623a472ee05762cf5d8b7f6aab" Jan 23 19:34:06 crc kubenswrapper[4688]: I0123 19:34:06.965972 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:34:06 crc kubenswrapper[4688]: I0123 19:34:06.966634 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:34:36 crc kubenswrapper[4688]: I0123 19:34:36.965659 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:34:36 crc kubenswrapper[4688]: I0123 19:34:36.966234 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:34:49 crc kubenswrapper[4688]: I0123 19:34:49.309226 4688 generic.go:334] "Generic (PLEG): container finished" podID="18226ae9-4f88-4376-a16d-b59b78912de7" containerID="b1d7f69f0f60e3abb32de44f107233e88cb609f88b141a5eaf012e37d3a5a9a0" exitCode=0 Jan 23 19:34:49 crc kubenswrapper[4688]: I0123 19:34:49.309409 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18226ae9-4f88-4376-a16d-b59b78912de7","Type":"ContainerDied","Data":"b1d7f69f0f60e3abb32de44f107233e88cb609f88b141a5eaf012e37d3a5a9a0"} Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.773655 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848578 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848706 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848783 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848822 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58g5h\" (UniqueName: \"kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848942 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.848997 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.849045 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.849079 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.849104 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config\") pod \"18226ae9-4f88-4376-a16d-b59b78912de7\" (UID: \"18226ae9-4f88-4376-a16d-b59b78912de7\") " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.849767 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.849969 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data" (OuterVolumeSpecName: "config-data") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.850001 4688 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.855775 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.855822 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h" (OuterVolumeSpecName: "kube-api-access-58g5h") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "kube-api-access-58g5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.876788 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.884378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.886119 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.908329 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.928336 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "18226ae9-4f88-4376-a16d-b59b78912de7" (UID: "18226ae9-4f88-4376-a16d-b59b78912de7"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952164 4688 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/18226ae9-4f88-4376-a16d-b59b78912de7-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952253 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952273 4688 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952290 4688 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18226ae9-4f88-4376-a16d-b59b78912de7-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952341 4688 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952355 4688 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952369 4688 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/18226ae9-4f88-4376-a16d-b59b78912de7-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.952382 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58g5h\" (UniqueName: \"kubernetes.io/projected/18226ae9-4f88-4376-a16d-b59b78912de7-kube-api-access-58g5h\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:50 crc kubenswrapper[4688]: I0123 19:34:50.975205 4688 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 23 19:34:51 crc kubenswrapper[4688]: I0123 19:34:51.058629 4688 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 23 19:34:51 crc kubenswrapper[4688]: I0123 19:34:51.338277 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"18226ae9-4f88-4376-a16d-b59b78912de7","Type":"ContainerDied","Data":"39155dfe65e97ef160356b779bb2f7fbb3d32e52eef7046e5b691e4a0eaeecdb"} Jan 23 19:34:51 crc kubenswrapper[4688]: I0123 19:34:51.338346 4688 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39155dfe65e97ef160356b779bb2f7fbb3d32e52eef7046e5b691e4a0eaeecdb" Jan 23 19:34:51 crc kubenswrapper[4688]: I0123 19:34:51.338357 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.110556 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 19:34:55 crc kubenswrapper[4688]: E0123 19:34:55.111814 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="extract-content" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.111833 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="extract-content" Jan 23 19:34:55 crc kubenswrapper[4688]: E0123 19:34:55.111849 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="registry-server" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.111855 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="registry-server" Jan 23 19:34:55 crc kubenswrapper[4688]: E0123 19:34:55.111875 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="extract-utilities" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.111880 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="extract-utilities" Jan 23 19:34:55 crc kubenswrapper[4688]: E0123 19:34:55.111901 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18226ae9-4f88-4376-a16d-b59b78912de7" containerName="tempest-tests-tempest-tests-runner" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.111906 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="18226ae9-4f88-4376-a16d-b59b78912de7" containerName="tempest-tests-tempest-tests-runner" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.112102 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="18226ae9-4f88-4376-a16d-b59b78912de7" containerName="tempest-tests-tempest-tests-runner" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.112118 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1275eb1-a9e3-4443-8fa3-5e501ccd3021" containerName="registry-server" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.113229 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.117347 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-twb2t" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.124281 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.250705 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.251047 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qlvj\" (UniqueName: \"kubernetes.io/projected/53111825-5a43-4a5c-924a-39e6ded40854-kube-api-access-5qlvj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.353767 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qlvj\" (UniqueName: \"kubernetes.io/projected/53111825-5a43-4a5c-924a-39e6ded40854-kube-api-access-5qlvj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.353975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.354830 4688 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.383450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qlvj\" (UniqueName: \"kubernetes.io/projected/53111825-5a43-4a5c-924a-39e6ded40854-kube-api-access-5qlvj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.385804 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"53111825-5a43-4a5c-924a-39e6ded40854\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc 
kubenswrapper[4688]: I0123 19:34:55.436951 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 19:34:55 crc kubenswrapper[4688]: I0123 19:34:55.928859 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 19:34:57 crc kubenswrapper[4688]: I0123 19:34:57.396611 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"53111825-5a43-4a5c-924a-39e6ded40854","Type":"ContainerStarted","Data":"19774f4234905183583781fc4df9e4c5983a6fa99b6fde5b56e4e6d2a09e93fb"} Jan 23 19:34:58 crc kubenswrapper[4688]: I0123 19:34:58.412116 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"53111825-5a43-4a5c-924a-39e6ded40854","Type":"ContainerStarted","Data":"99254b70c8a9f2dfd1a9c8de7c0f1a7d0ef997f17681f271ebd92ad6118f47f8"} Jan 23 19:34:58 crc kubenswrapper[4688]: I0123 19:34:58.428005 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.619481287 podStartE2EDuration="3.427984959s" podCreationTimestamp="2026-01-23 19:34:55 +0000 UTC" firstStartedPulling="2026-01-23 19:34:56.576663098 +0000 UTC m=+5291.572487559" lastFinishedPulling="2026-01-23 19:34:57.38516679 +0000 UTC m=+5292.380991231" observedRunningTime="2026-01-23 19:34:58.426789115 +0000 UTC m=+5293.422613566" watchObservedRunningTime="2026-01-23 19:34:58.427984959 +0000 UTC m=+5293.423809400" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.568324 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"] Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.574939 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.582770 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"] Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.698391 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.698761 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.698820 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbsc\" (UniqueName: \"kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.722489 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"] Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.725244 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.739936 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"] Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801080 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801165 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801212 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbsc\" (UniqueName: \"kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801338 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdrw\" (UniqueName: \"kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw\") pod \"certified-operators-9hw7b\" (UID: 
\"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801396 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.801478 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.802007 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.802331 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.827693 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbsc\" (UniqueName: \"kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc\") pod \"redhat-operators-gk2q7\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.902840 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.903267 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.903732 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.903925 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.904053 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdrw\" (UniqueName: \"kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.904481 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:01 crc kubenswrapper[4688]: I0123 19:35:01.941889 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdrw\" (UniqueName: \"kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw\") pod \"certified-operators-9hw7b\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") " pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:02 crc kubenswrapper[4688]: I0123 19:35:02.098974 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:02 crc kubenswrapper[4688]: I0123 19:35:02.414687 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"] Jan 23 19:35:02 crc kubenswrapper[4688]: I0123 19:35:02.457378 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerStarted","Data":"caded996630d0bb33ed5275bed84b69f5b1ae849cf5fd5a292070ea137c7da98"} Jan 23 19:35:02 crc kubenswrapper[4688]: I0123 19:35:02.651612 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"] Jan 23 19:35:03 crc kubenswrapper[4688]: I0123 19:35:03.468179 4688 generic.go:334] "Generic (PLEG): container finished" podID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerID="60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1" exitCode=0 Jan 23 19:35:03 crc kubenswrapper[4688]: I0123 19:35:03.468473 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerDied","Data":"60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1"} Jan 23 19:35:03 crc kubenswrapper[4688]: I0123 19:35:03.468686 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerStarted","Data":"6142686a750c019bf7fc8705dee783dec92b1da9fe45f49543c775df561f0de4"} Jan 23 19:35:03 crc kubenswrapper[4688]: I0123 19:35:03.470778 4688 generic.go:334] "Generic (PLEG): container finished" podID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerID="b1d718c0023f2bb2056c6ac8eaf270594fa5e8ca56e8d993908bf9ddedbdade6" exitCode=0 Jan 23 19:35:03 crc kubenswrapper[4688]: I0123 19:35:03.470808 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerDied","Data":"b1d718c0023f2bb2056c6ac8eaf270594fa5e8ca56e8d993908bf9ddedbdade6"} Jan 23 19:35:05 crc kubenswrapper[4688]: I0123 19:35:05.490233 4688 generic.go:334] "Generic (PLEG): container finished" podID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerID="da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1" exitCode=0 Jan 23 19:35:05 crc kubenswrapper[4688]: I0123 19:35:05.490326 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerDied","Data":"da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1"} Jan 23 19:35:05 crc kubenswrapper[4688]: I0123 19:35:05.494176 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerStarted","Data":"5e06b62644f4d0b53fb5a84cd1142e8b0e04b2944feb2a51fc694cdc6737ae43"} Jan 23 19:35:06 crc kubenswrapper[4688]: I0123 19:35:06.975994 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:35:06 crc kubenswrapper[4688]: I0123 19:35:06.976434 4688 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:35:06 crc kubenswrapper[4688]: I0123 19:35:06.976493 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:35:06 crc kubenswrapper[4688]: I0123 19:35:06.978617 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:35:06 crc kubenswrapper[4688]: I0123 19:35:06.978763 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" gracePeriod=600 Jan 23 19:35:08 crc kubenswrapper[4688]: I0123 19:35:08.522908 4688 generic.go:334] "Generic (PLEG): container finished" podID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerID="5e06b62644f4d0b53fb5a84cd1142e8b0e04b2944feb2a51fc694cdc6737ae43" exitCode=0 Jan 23 19:35:08 crc kubenswrapper[4688]: I0123 19:35:08.522972 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerDied","Data":"5e06b62644f4d0b53fb5a84cd1142e8b0e04b2944feb2a51fc694cdc6737ae43"} Jan 23 19:35:10 crc kubenswrapper[4688]: E0123 19:35:10.850789 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.598662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerStarted","Data":"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67"} Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.601763 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerStarted","Data":"65cdc5fc29e078932c12576b1e83971e006825dd600a2932336afde883f26094"} Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.605129 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" exitCode=0 Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.605173 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" 
event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"} Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.605270 4688 scope.go:117] "RemoveContainer" containerID="bc8ddd6066ff8419059260917f835e5d1b88095e82f76dcea3a78c14c7b798c7" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.606449 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:35:11 crc kubenswrapper[4688]: E0123 19:35:11.607039 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.621315 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9hw7b" podStartSLOduration=6.112408232 podStartE2EDuration="10.62129609s" podCreationTimestamp="2026-01-23 19:35:01 +0000 UTC" firstStartedPulling="2026-01-23 19:35:03.471920428 +0000 UTC m=+5298.467744869" lastFinishedPulling="2026-01-23 19:35:07.980808286 +0000 UTC m=+5302.976632727" observedRunningTime="2026-01-23 19:35:11.619161509 +0000 UTC m=+5306.614985960" watchObservedRunningTime="2026-01-23 19:35:11.62129609 +0000 UTC m=+5306.617120531" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.654728 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gk2q7" podStartSLOduration=3.253879965 podStartE2EDuration="10.654699879s" podCreationTimestamp="2026-01-23 19:35:01 +0000 UTC" firstStartedPulling="2026-01-23 19:35:03.472562396 +0000 UTC m=+5298.468386837" lastFinishedPulling="2026-01-23 19:35:10.87338231 +0000 UTC m=+5305.869206751" observedRunningTime="2026-01-23 19:35:11.644913861 +0000 UTC m=+5306.640738302" watchObservedRunningTime="2026-01-23 19:35:11.654699879 +0000 UTC m=+5306.650524330" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.903432 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:11 crc kubenswrapper[4688]: I0123 19:35:11.903497 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:12 crc kubenswrapper[4688]: I0123 19:35:12.100346 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:12 crc kubenswrapper[4688]: I0123 19:35:12.100399 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:12 crc kubenswrapper[4688]: I0123 19:35:12.960062 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gk2q7" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="registry-server" probeResult="failure" output=< Jan 23 19:35:12 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:35:12 crc kubenswrapper[4688]: > Jan 23 19:35:13 crc kubenswrapper[4688]: I0123 19:35:13.507654 4688 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-9hw7b" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="registry-server" probeResult="failure" output=< Jan 23 19:35:13 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:35:13 crc kubenswrapper[4688]: > Jan 23 19:35:21 crc kubenswrapper[4688]: I0123 19:35:21.974630 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.034769 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.207627 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"] Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.640392 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.644509 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzrlq/must-gather-jrrhs"] Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.646841 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.650624 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vzrlq"/"default-dockercfg-n6p6t" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.651096 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzrlq"/"kube-root-ca.crt" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.651454 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vzrlq"/"openshift-service-ca.crt" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.664699 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzrlq/must-gather-jrrhs"] Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.716874 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.845178 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.845267 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbml4\" (UniqueName: \"kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.946921 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 
19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.946976 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbml4\" (UniqueName: \"kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.949748 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.970104 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbml4\" (UniqueName: \"kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4\") pod \"must-gather-jrrhs\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:22 crc kubenswrapper[4688]: I0123 19:35:22.973677 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:35:23 crc kubenswrapper[4688]: I0123 19:35:23.505986 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vzrlq/must-gather-jrrhs"] Jan 23 19:35:23 crc kubenswrapper[4688]: I0123 19:35:23.724873 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" event={"ID":"6042bb85-ccfd-4a48-a512-7997683d1570","Type":"ContainerStarted","Data":"576e24581b909d6707706d270ac1a5ddf01e5151d2d3c9421e11a9fa4d36cc60"} Jan 23 19:35:23 crc kubenswrapper[4688]: I0123 19:35:23.725084 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gk2q7" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="registry-server" containerID="cri-o://65cdc5fc29e078932c12576b1e83971e006825dd600a2932336afde883f26094" gracePeriod=2 Jan 23 19:35:24 crc kubenswrapper[4688]: I0123 19:35:24.801765 4688 generic.go:334] "Generic (PLEG): container finished" podID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerID="65cdc5fc29e078932c12576b1e83971e006825dd600a2932336afde883f26094" exitCode=0 Jan 23 19:35:24 crc kubenswrapper[4688]: I0123 19:35:24.801813 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerDied","Data":"65cdc5fc29e078932c12576b1e83971e006825dd600a2932336afde883f26094"} Jan 23 19:35:24 crc kubenswrapper[4688]: I0123 19:35:24.802998 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"] Jan 23 19:35:24 crc kubenswrapper[4688]: I0123 19:35:24.803290 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9hw7b" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="registry-server" containerID="cri-o://03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67" gracePeriod=2 Jan 23 19:35:24 crc kubenswrapper[4688]: I0123 19:35:24.970434 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gk2q7" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.087411 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities\") pod \"a99f64da-b49a-47e7-91da-b45d5df5d2df\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.087490 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content\") pod \"a99f64da-b49a-47e7-91da-b45d5df5d2df\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.087701 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpbsc\" (UniqueName: \"kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc\") pod \"a99f64da-b49a-47e7-91da-b45d5df5d2df\" (UID: \"a99f64da-b49a-47e7-91da-b45d5df5d2df\") " Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.088616 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities" (OuterVolumeSpecName: "utilities") pod "a99f64da-b49a-47e7-91da-b45d5df5d2df" (UID: "a99f64da-b49a-47e7-91da-b45d5df5d2df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.094034 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc" (OuterVolumeSpecName: "kube-api-access-qpbsc") pod "a99f64da-b49a-47e7-91da-b45d5df5d2df" (UID: "a99f64da-b49a-47e7-91da-b45d5df5d2df"). InnerVolumeSpecName "kube-api-access-qpbsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.191029 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.191063 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpbsc\" (UniqueName: \"kubernetes.io/projected/a99f64da-b49a-47e7-91da-b45d5df5d2df-kube-api-access-qpbsc\") on node \"crc\" DevicePath \"\"" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.212419 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a99f64da-b49a-47e7-91da-b45d5df5d2df" (UID: "a99f64da-b49a-47e7-91da-b45d5df5d2df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.292733 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a99f64da-b49a-47e7-91da-b45d5df5d2df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.750467 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.817806 4688 generic.go:334] "Generic (PLEG): container finished" podID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerID="03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67" exitCode=0 Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.817869 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9hw7b" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.817863 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerDied","Data":"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67"} Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.817940 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9hw7b" event={"ID":"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba","Type":"ContainerDied","Data":"6142686a750c019bf7fc8705dee783dec92b1da9fe45f49543c775df561f0de4"} Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.818013 4688 scope.go:117] "RemoveContainer" containerID="03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67" Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.820922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gk2q7" event={"ID":"a99f64da-b49a-47e7-91da-b45d5df5d2df","Type":"ContainerDied","Data":"caded996630d0bb33ed5275bed84b69f5b1ae849cf5fd5a292070ea137c7da98"} Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.820992 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gk2q7"
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.883015 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"]
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.891890 4688 scope.go:117] "RemoveContainer" containerID="da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1"
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.898343 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gk2q7"]
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.905265 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content\") pod \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") "
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.905325 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdrw\" (UniqueName: \"kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw\") pod \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") "
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.905358 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities\") pod \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\" (UID: \"6dade3f5-8dee-4aac-8906-6fb98fc6c5ba\") "
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.906593 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities" (OuterVolumeSpecName: "utilities") pod "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" (UID: "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.912838 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw" (OuterVolumeSpecName: "kube-api-access-ltdrw") pod "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" (UID: "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba"). InnerVolumeSpecName "kube-api-access-ltdrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.926695 4688 scope.go:117] "RemoveContainer" containerID="60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1"
Jan 23 19:35:25 crc kubenswrapper[4688]: I0123 19:35:25.952414 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" (UID: "6dade3f5-8dee-4aac-8906-6fb98fc6c5ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.008072 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.008384 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdrw\" (UniqueName: \"kubernetes.io/projected/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-kube-api-access-ltdrw\") on node \"crc\" DevicePath \"\""
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.008395 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.160854 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"]
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.171836 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9hw7b"]
Jan 23 19:35:26 crc kubenswrapper[4688]: I0123 19:35:26.356534 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:35:26 crc kubenswrapper[4688]: E0123 19:35:26.356847 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:35:27 crc kubenswrapper[4688]: I0123 19:35:27.437426 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" path="/var/lib/kubelet/pods/6dade3f5-8dee-4aac-8906-6fb98fc6c5ba/volumes"
Jan 23 19:35:27 crc kubenswrapper[4688]: I0123 19:35:27.438117 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" path="/var/lib/kubelet/pods/a99f64da-b49a-47e7-91da-b45d5df5d2df/volumes"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.850015 4688 scope.go:117] "RemoveContainer" containerID="03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67"
Jan 23 19:35:30 crc kubenswrapper[4688]: E0123 19:35:30.850955 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67\": container with ID starting with 03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67 not found: ID does not exist" containerID="03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851025 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67"} err="failed to get container status \"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67\": rpc error: code = NotFound desc = could not find container \"03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67\": container with ID starting with 03cdfda6c429c3a1c610b83ea129b565fb76eb505602af6fdcbbf2b2774d6d67 not found: ID does not exist"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851073 4688 scope.go:117] "RemoveContainer" containerID="da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1"
Jan 23 19:35:30 crc kubenswrapper[4688]: E0123 19:35:30.851387 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1\": container with ID starting with da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1 not found: ID does not exist" containerID="da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851411 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1"} err="failed to get container status \"da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1\": rpc error: code = NotFound desc = could not find container \"da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1\": container with ID starting with da2f0e8299e8395543a1fe41cbd45474507dbf93731a3127296d4c3018336ea1 not found: ID does not exist"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851428 4688 scope.go:117] "RemoveContainer" containerID="60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1"
Jan 23 19:35:30 crc kubenswrapper[4688]: E0123 19:35:30.851708 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1\": container with ID starting with 60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1 not found: ID does not exist" containerID="60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851747 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1"} err="failed to get container status \"60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1\": rpc error: code = NotFound desc = could not find container \"60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1\": container with ID starting with 60687ad3148ec4c6b0ba403b783e32f1968d0e1467495d2987bd837ae5f404b1 not found: ID does not exist"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.851774 4688 scope.go:117] "RemoveContainer" containerID="65cdc5fc29e078932c12576b1e83971e006825dd600a2932336afde883f26094"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.887588 4688 scope.go:117] "RemoveContainer" containerID="5e06b62644f4d0b53fb5a84cd1142e8b0e04b2944feb2a51fc694cdc6737ae43"
Jan 23 19:35:30 crc kubenswrapper[4688]: I0123 19:35:30.912679 4688 scope.go:117] "RemoveContainer" containerID="b1d718c0023f2bb2056c6ac8eaf270594fa5e8ca56e8d993908bf9ddedbdade6"
Jan 23 19:35:31 crc kubenswrapper[4688]: I0123 19:35:31.990030 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" event={"ID":"6042bb85-ccfd-4a48-a512-7997683d1570","Type":"ContainerStarted","Data":"6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5"}
Jan 23 19:35:31 crc kubenswrapper[4688]: I0123 19:35:31.990558 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" event={"ID":"6042bb85-ccfd-4a48-a512-7997683d1570","Type":"ContainerStarted","Data":"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3"}
Jan 23 19:35:32 crc kubenswrapper[4688]: I0123 19:35:32.009898 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" podStartSLOduration=2.596804644 podStartE2EDuration="10.009872146s" podCreationTimestamp="2026-01-23 19:35:22 +0000 UTC" firstStartedPulling="2026-01-23 19:35:23.518639302 +0000 UTC m=+5318.514463743" lastFinishedPulling="2026-01-23 19:35:30.931706804 +0000 UTC m=+5325.927531245" observedRunningTime="2026-01-23 19:35:32.004167614 +0000 UTC m=+5326.999992055" watchObservedRunningTime="2026-01-23 19:35:32.009872146 +0000 UTC m=+5327.005696587"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.544587 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-d6swx"]
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545631 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="extract-utilities"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545647 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="extract-utilities"
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545661 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545667 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545686 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545692 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545718 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="extract-utilities"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545724 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="extract-utilities"
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545737 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="extract-content"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545742 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="extract-content"
Jan 23 19:35:35 crc kubenswrapper[4688]: E0123 19:35:35.545754 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="extract-content"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545760 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="extract-content"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545926 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a99f64da-b49a-47e7-91da-b45d5df5d2df" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.545940 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dade3f5-8dee-4aac-8906-6fb98fc6c5ba" containerName="registry-server"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.546747 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.709599 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.710294 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br47g\" (UniqueName: \"kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.811962 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.812173 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.812486 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br47g\" (UniqueName: \"kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.842749 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br47g\" (UniqueName: \"kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g\") pod \"crc-debug-d6swx\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: I0123 19:35:35.866394 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-d6swx"
Jan 23 19:35:35 crc kubenswrapper[4688]: W0123 19:35:35.908454 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod763f2b71_f4de_47c9_9ec4_4756a8533eea.slice/crio-73f697d1dff19499f1b48d30f0a6e8e7b083caa147d93ee2b7d690b0165e7b73 WatchSource:0}: Error finding container 73f697d1dff19499f1b48d30f0a6e8e7b083caa147d93ee2b7d690b0165e7b73: Status 404 returned error can't find the container with id 73f697d1dff19499f1b48d30f0a6e8e7b083caa147d93ee2b7d690b0165e7b73
Jan 23 19:35:36 crc kubenswrapper[4688]: I0123 19:35:36.031389 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" event={"ID":"763f2b71-f4de-47c9-9ec4-4756a8533eea","Type":"ContainerStarted","Data":"73f697d1dff19499f1b48d30f0a6e8e7b083caa147d93ee2b7d690b0165e7b73"}
Jan 23 19:35:40 crc kubenswrapper[4688]: I0123 19:35:40.356873 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:35:40 crc kubenswrapper[4688]: E0123 19:35:40.357627 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:35:51 crc kubenswrapper[4688]: E0123 19:35:51.051534 4688 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Jan 23 19:35:51 crc kubenswrapper[4688]: E0123 19:35:51.052150 4688 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-br47g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-d6swx_openshift-must-gather-vzrlq(763f2b71-f4de-47c9-9ec4-4756a8533eea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 19:35:51 crc kubenswrapper[4688]: E0123 19:35:51.053443 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea"
Jan 23 19:35:51 crc kubenswrapper[4688]: E0123 19:35:51.479623 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea"
Jan 23 19:35:52 crc kubenswrapper[4688]: I0123 19:35:52.356878 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:35:52 crc kubenswrapper[4688]: E0123 19:35:52.357432 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:36:06 crc kubenswrapper[4688]: I0123 19:36:06.827230 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" event={"ID":"763f2b71-f4de-47c9-9ec4-4756a8533eea","Type":"ContainerStarted","Data":"990fd763b07327adbe400203e6fd5a7982f195315b064a73b0a390194a00a886"}
Jan 23 19:36:06 crc kubenswrapper[4688]: I0123 19:36:06.851496 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" podStartSLOduration=1.8346989599999999 podStartE2EDuration="31.851476614s" podCreationTimestamp="2026-01-23 19:35:35 +0000 UTC" firstStartedPulling="2026-01-23 19:35:35.912294062 +0000 UTC m=+5330.908118493" lastFinishedPulling="2026-01-23 19:36:05.929071706 +0000 UTC m=+5360.924896147" observedRunningTime="2026-01-23 19:36:06.841259023 +0000 UTC m=+5361.837083464" watchObservedRunningTime="2026-01-23 19:36:06.851476614 +0000 UTC m=+5361.847301055"
Jan 23 19:36:07 crc kubenswrapper[4688]: I0123 19:36:07.356982 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:36:07 crc kubenswrapper[4688]: E0123 19:36:07.357297 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:36:18 crc kubenswrapper[4688]: I0123 19:36:18.356430 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:36:18 crc kubenswrapper[4688]: E0123 19:36:18.357990 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.743982 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"]
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.750802 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.756448 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"]
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.864230 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.864435 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lch\" (UniqueName: \"kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.864502 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.966902 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.967283 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.967393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.967622 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lch\" (UniqueName: \"kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:25 crc kubenswrapper[4688]: I0123 19:36:25.967778 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:26 crc kubenswrapper[4688]: I0123 19:36:26.470336 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lch\" (UniqueName: \"kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch\") pod \"redhat-marketplace-ht8sf\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") " pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:26 crc kubenswrapper[4688]: I0123 19:36:26.711862 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:27 crc kubenswrapper[4688]: W0123 19:36:27.284384 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac82f1ec_0e22_412b_be79_bf06e3da973b.slice/crio-642d23f0cc5ebaaeea174f8e7c876be0ea2a498dcb76d205fd6a69a33e4275a7 WatchSource:0}: Error finding container 642d23f0cc5ebaaeea174f8e7c876be0ea2a498dcb76d205fd6a69a33e4275a7: Status 404 returned error can't find the container with id 642d23f0cc5ebaaeea174f8e7c876be0ea2a498dcb76d205fd6a69a33e4275a7
Jan 23 19:36:27 crc kubenswrapper[4688]: I0123 19:36:27.289578 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"]
Jan 23 19:36:28 crc kubenswrapper[4688]: I0123 19:36:28.008609 4688 generic.go:334] "Generic (PLEG): container finished" podID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerID="cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe" exitCode=0
Jan 23 19:36:28 crc kubenswrapper[4688]: I0123 19:36:28.008667 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerDied","Data":"cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe"}
Jan 23 19:36:28 crc kubenswrapper[4688]: I0123 19:36:28.008993 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerStarted","Data":"642d23f0cc5ebaaeea174f8e7c876be0ea2a498dcb76d205fd6a69a33e4275a7"}
Jan 23 19:36:28 crc kubenswrapper[4688]: I0123 19:36:28.011507 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 19:36:30 crc kubenswrapper[4688]: I0123 19:36:30.034024 4688 generic.go:334] "Generic (PLEG): container finished" podID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerID="19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8" exitCode=0
Jan 23 19:36:30 crc kubenswrapper[4688]: I0123 19:36:30.034697 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerDied","Data":"19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8"}
Jan 23 19:36:31 crc kubenswrapper[4688]: I0123 19:36:31.045522 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerStarted","Data":"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"}
Jan 23 19:36:31 crc kubenswrapper[4688]: I0123 19:36:31.065437 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ht8sf" podStartSLOduration=3.5501811119999998 podStartE2EDuration="6.065415175s" podCreationTimestamp="2026-01-23 19:36:25 +0000 UTC" firstStartedPulling="2026-01-23 19:36:28.011132446 +0000 UTC m=+5383.006956887" lastFinishedPulling="2026-01-23 19:36:30.526366509 +0000 UTC m=+5385.522190950" observedRunningTime="2026-01-23 19:36:31.063894642 +0000 UTC m=+5386.059719103" watchObservedRunningTime="2026-01-23 19:36:31.065415175 +0000 UTC m=+5386.061239616"
Jan 23 19:36:33 crc kubenswrapper[4688]: I0123 19:36:33.356749 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:36:33 crc kubenswrapper[4688]: E0123 19:36:33.357529 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:36:36 crc kubenswrapper[4688]: I0123 19:36:36.711987 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:36 crc kubenswrapper[4688]: I0123 19:36:36.712746 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:36 crc kubenswrapper[4688]: I0123 19:36:36.767744 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:37 crc kubenswrapper[4688]: I0123 19:36:37.168002 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:37 crc kubenswrapper[4688]: I0123 19:36:37.222291 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"]
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.129366 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ht8sf" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="registry-server" containerID="cri-o://f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa" gracePeriod=2
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.606317 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.719095 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities\") pod \"ac82f1ec-0e22-412b-be79-bf06e3da973b\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") "
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.719475 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content\") pod \"ac82f1ec-0e22-412b-be79-bf06e3da973b\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") "
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.719528 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4lch\" (UniqueName: \"kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch\") pod \"ac82f1ec-0e22-412b-be79-bf06e3da973b\" (UID: \"ac82f1ec-0e22-412b-be79-bf06e3da973b\") "
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.720283 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities" (OuterVolumeSpecName: "utilities") pod "ac82f1ec-0e22-412b-be79-bf06e3da973b" (UID: "ac82f1ec-0e22-412b-be79-bf06e3da973b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.724669 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch" (OuterVolumeSpecName: "kube-api-access-b4lch") pod "ac82f1ec-0e22-412b-be79-bf06e3da973b" (UID: "ac82f1ec-0e22-412b-be79-bf06e3da973b"). InnerVolumeSpecName "kube-api-access-b4lch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.751378 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac82f1ec-0e22-412b-be79-bf06e3da973b" (UID: "ac82f1ec-0e22-412b-be79-bf06e3da973b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.823842 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.823885 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac82f1ec-0e22-412b-be79-bf06e3da973b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:36:39 crc kubenswrapper[4688]: I0123 19:36:39.823903 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4lch\" (UniqueName: \"kubernetes.io/projected/ac82f1ec-0e22-412b-be79-bf06e3da973b-kube-api-access-b4lch\") on node \"crc\" DevicePath \"\""
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.146749 4688 generic.go:334] "Generic (PLEG): container finished" podID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerID="f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa" exitCode=0
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.146800 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerDied","Data":"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"}
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.146832 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ht8sf" event={"ID":"ac82f1ec-0e22-412b-be79-bf06e3da973b","Type":"ContainerDied","Data":"642d23f0cc5ebaaeea174f8e7c876be0ea2a498dcb76d205fd6a69a33e4275a7"}
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.146851 4688 scope.go:117] "RemoveContainer" containerID="f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.146923 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ht8sf"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.168441 4688 scope.go:117] "RemoveContainer" containerID="19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.187299 4688 scope.go:117] "RemoveContainer" containerID="cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.247915 4688 scope.go:117] "RemoveContainer" containerID="f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"
Jan 23 19:36:40 crc kubenswrapper[4688]: E0123 19:36:40.248595 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa\": container with ID starting with f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa not found: ID does not exist" containerID="f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.248714 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa"} err="failed to get container status \"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa\": rpc error: code = NotFound desc = could not find container \"f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa\": container with ID starting with f9ad0d3770f73b4568fd5d7e0d72100972844996b9330597e0ff96df31505ffa not found: ID does not exist"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.248761 4688 scope.go:117] "RemoveContainer" containerID="19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8"
Jan 23 19:36:40 crc kubenswrapper[4688]: E0123 19:36:40.249476 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8\": container with ID starting with 19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8 not found: ID does not exist" containerID="19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.249566 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8"} err="failed to get container status \"19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8\": rpc error: code = NotFound desc = could not find container \"19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8\": container with ID starting with 19306c13ec923afc47eb4d565ea4eb5f5dd55c909871efc735bb711a6be18ce8 not found: ID does not exist"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.249601 4688 scope.go:117] "RemoveContainer" containerID="cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe"
Jan 23 19:36:40 crc kubenswrapper[4688]: E0123 19:36:40.252957 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe\": container with ID starting with cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe not found: ID does not exist" containerID="cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe"
Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.253008 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe"} err="failed to get container status \"cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe\": rpc error: code = NotFound desc = could not find container \"cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe\": container with ID starting with cb39ec09f135ae3fedf63acb3f9baf5ca1e4e2f10c006a90f3a4e068597e61fe not found: ID does not exist" Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.253950 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"] Jan 23 19:36:40 crc kubenswrapper[4688]: I0123 19:36:40.263037 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ht8sf"] Jan 23 19:36:41 crc kubenswrapper[4688]: I0123 19:36:41.371755 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" path="/var/lib/kubelet/pods/ac82f1ec-0e22-412b-be79-bf06e3da973b/volumes" Jan 23 19:36:44 crc kubenswrapper[4688]: I0123 19:36:44.356532 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:36:44 crc kubenswrapper[4688]: E0123 19:36:44.357408 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:36:55 crc kubenswrapper[4688]: I0123 19:36:55.285930 4688 generic.go:334] "Generic (PLEG): container finished" podID="763f2b71-f4de-47c9-9ec4-4756a8533eea" containerID="990fd763b07327adbe400203e6fd5a7982f195315b064a73b0a390194a00a886" exitCode=0 Jan 23 19:36:55 crc kubenswrapper[4688]: I0123 19:36:55.286026 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" event={"ID":"763f2b71-f4de-47c9-9ec4-4756a8533eea","Type":"ContainerDied","Data":"990fd763b07327adbe400203e6fd5a7982f195315b064a73b0a390194a00a886"} Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.357345 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:36:56 crc kubenswrapper[4688]: E0123 19:36:56.358734 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.400899 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.436568 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-d6swx"] Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.447208 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-d6swx"] Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.518022 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host\") pod \"763f2b71-f4de-47c9-9ec4-4756a8533eea\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.518215 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br47g\" (UniqueName: \"kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g\") pod \"763f2b71-f4de-47c9-9ec4-4756a8533eea\" (UID: \"763f2b71-f4de-47c9-9ec4-4756a8533eea\") " Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.518299 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host" (OuterVolumeSpecName: "host") pod "763f2b71-f4de-47c9-9ec4-4756a8533eea" (UID: "763f2b71-f4de-47c9-9ec4-4756a8533eea"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.518641 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/763f2b71-f4de-47c9-9ec4-4756a8533eea-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.525544 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g" (OuterVolumeSpecName: "kube-api-access-br47g") pod "763f2b71-f4de-47c9-9ec4-4756a8533eea" (UID: "763f2b71-f4de-47c9-9ec4-4756a8533eea"). InnerVolumeSpecName "kube-api-access-br47g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:36:56 crc kubenswrapper[4688]: I0123 19:36:56.620772 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br47g\" (UniqueName: \"kubernetes.io/projected/763f2b71-f4de-47c9-9ec4-4756a8533eea-kube-api-access-br47g\") on node \"crc\" DevicePath \"\"" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.311852 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f697d1dff19499f1b48d30f0a6e8e7b083caa147d93ee2b7d690b0165e7b73" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.311944 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-d6swx" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.373221 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea" path="/var/lib/kubelet/pods/763f2b71-f4de-47c9-9ec4-4756a8533eea/volumes" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.601788 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-kh8wp"] Jan 23 19:36:57 crc kubenswrapper[4688]: E0123 19:36:57.602202 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="extract-utilities" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602218 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="extract-utilities" Jan 23 19:36:57 crc kubenswrapper[4688]: E0123 19:36:57.602239 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="extract-content" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602245 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="extract-content" Jan 23 19:36:57 crc kubenswrapper[4688]: E0123 19:36:57.602260 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea" containerName="container-00" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602268 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea" containerName="container-00" Jan 23 19:36:57 crc kubenswrapper[4688]: E0123 19:36:57.602301 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="registry-server" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602306 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="registry-server" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602472 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac82f1ec-0e22-412b-be79-bf06e3da973b" containerName="registry-server" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.602507 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="763f2b71-f4de-47c9-9ec4-4756a8533eea" containerName="container-00" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.603151 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.639442 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkfm\" (UniqueName: \"kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.639870 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.742760 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.742956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrkfm\" (UniqueName: \"kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.743397 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.760077 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrkfm\" (UniqueName: \"kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm\") pod \"crc-debug-kh8wp\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:57 crc kubenswrapper[4688]: I0123 19:36:57.919982 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:36:58 crc kubenswrapper[4688]: I0123 19:36:58.322269 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" event={"ID":"f27778ba-ff98-4117-9b0d-15312ff808db","Type":"ContainerStarted","Data":"6f753aa33e62fb644538cc4457950627a3ce92861cf5889312cac05d5680994d"} Jan 23 19:36:58 crc kubenswrapper[4688]: I0123 19:36:58.322648 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" event={"ID":"f27778ba-ff98-4117-9b0d-15312ff808db","Type":"ContainerStarted","Data":"04e5d8b99840d938478d0d1e557b50ee127e9cf1b6210e44c9dee8eb8239203f"} Jan 23 19:36:58 crc kubenswrapper[4688]: I0123 19:36:58.339274 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" podStartSLOduration=1.339255277 podStartE2EDuration="1.339255277s" podCreationTimestamp="2026-01-23 19:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:36:58.337475616 +0000 UTC m=+5413.333300047" watchObservedRunningTime="2026-01-23 19:36:58.339255277 +0000 UTC m=+5413.335079718" Jan 23 19:36:59 crc kubenswrapper[4688]: I0123 19:36:59.335543 4688 generic.go:334] "Generic (PLEG): container finished" podID="f27778ba-ff98-4117-9b0d-15312ff808db" containerID="6f753aa33e62fb644538cc4457950627a3ce92861cf5889312cac05d5680994d" exitCode=0 Jan 23 19:36:59 crc kubenswrapper[4688]: I0123 19:36:59.335618 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" event={"ID":"f27778ba-ff98-4117-9b0d-15312ff808db","Type":"ContainerDied","Data":"6f753aa33e62fb644538cc4457950627a3ce92861cf5889312cac05d5680994d"} Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.442997 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.490133 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrkfm\" (UniqueName: \"kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm\") pod \"f27778ba-ff98-4117-9b0d-15312ff808db\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.490455 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host\") pod \"f27778ba-ff98-4117-9b0d-15312ff808db\" (UID: \"f27778ba-ff98-4117-9b0d-15312ff808db\") " Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.490516 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host" (OuterVolumeSpecName: "host") pod "f27778ba-ff98-4117-9b0d-15312ff808db" (UID: "f27778ba-ff98-4117-9b0d-15312ff808db"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.491315 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f27778ba-ff98-4117-9b0d-15312ff808db-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.495937 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm" (OuterVolumeSpecName: "kube-api-access-vrkfm") pod "f27778ba-ff98-4117-9b0d-15312ff808db" (UID: "f27778ba-ff98-4117-9b0d-15312ff808db"). InnerVolumeSpecName "kube-api-access-vrkfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.592713 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrkfm\" (UniqueName: \"kubernetes.io/projected/f27778ba-ff98-4117-9b0d-15312ff808db-kube-api-access-vrkfm\") on node \"crc\" DevicePath \"\"" Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.825931 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-kh8wp"] Jan 23 19:37:00 crc kubenswrapper[4688]: I0123 19:37:00.834277 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-kh8wp"] Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.358152 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-kh8wp" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.367508 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27778ba-ff98-4117-9b0d-15312ff808db" path="/var/lib/kubelet/pods/f27778ba-ff98-4117-9b0d-15312ff808db/volumes" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.368228 4688 scope.go:117] "RemoveContainer" containerID="6f753aa33e62fb644538cc4457950627a3ce92861cf5889312cac05d5680994d" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.985719 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-8n4z6"] Jan 23 19:37:01 crc kubenswrapper[4688]: E0123 19:37:01.986367 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27778ba-ff98-4117-9b0d-15312ff808db" containerName="container-00" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.986386 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27778ba-ff98-4117-9b0d-15312ff808db" containerName="container-00" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.986624 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27778ba-ff98-4117-9b0d-15312ff808db" containerName="container-00" Jan 23 19:37:01 crc kubenswrapper[4688]: I0123 19:37:01.987480 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.028149 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.028566 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h465\" (UniqueName: \"kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.130631 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.130748 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h465\" (UniqueName: \"kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.130782 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.149248 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h465\" (UniqueName: \"kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465\") pod \"crc-debug-8n4z6\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") " pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.306089 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:02 crc kubenswrapper[4688]: W0123 19:37:02.331533 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e4444f_4e98_4a92_b305_2c038280ddc4.slice/crio-2385183263da5b9ccb8618997b539ae60c7d647a08e3b8eb03cb7b9d110b9fef WatchSource:0}: Error finding container 2385183263da5b9ccb8618997b539ae60c7d647a08e3b8eb03cb7b9d110b9fef: Status 404 returned error can't find the container with id 2385183263da5b9ccb8618997b539ae60c7d647a08e3b8eb03cb7b9d110b9fef
Jan 23 19:37:02 crc kubenswrapper[4688]: I0123 19:37:02.375742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6" event={"ID":"e2e4444f-4e98-4a92-b305-2c038280ddc4","Type":"ContainerStarted","Data":"2385183263da5b9ccb8618997b539ae60c7d647a08e3b8eb03cb7b9d110b9fef"}
Jan 23 19:37:03 crc kubenswrapper[4688]: I0123 19:37:03.387675 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2e4444f-4e98-4a92-b305-2c038280ddc4" containerID="4200850bd52065b6ab816f649ea0c3a1c4dc441b0fc7b045e77cd382097ea04b" exitCode=0
Jan 23 19:37:03 crc kubenswrapper[4688]: I0123 19:37:03.387742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6" event={"ID":"e2e4444f-4e98-4a92-b305-2c038280ddc4","Type":"ContainerDied","Data":"4200850bd52065b6ab816f649ea0c3a1c4dc441b0fc7b045e77cd382097ea04b"}
Jan 23 19:37:03 crc kubenswrapper[4688]: I0123 19:37:03.431894 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-8n4z6"]
Jan 23 19:37:03 crc kubenswrapper[4688]: I0123 19:37:03.441695 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzrlq/crc-debug-8n4z6"]
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.535100 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.604900 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h465\" (UniqueName: \"kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465\") pod \"e2e4444f-4e98-4a92-b305-2c038280ddc4\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") "
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.605043 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host\") pod \"e2e4444f-4e98-4a92-b305-2c038280ddc4\" (UID: \"e2e4444f-4e98-4a92-b305-2c038280ddc4\") "
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.605161 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host" (OuterVolumeSpecName: "host") pod "e2e4444f-4e98-4a92-b305-2c038280ddc4" (UID: "e2e4444f-4e98-4a92-b305-2c038280ddc4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.605822 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2e4444f-4e98-4a92-b305-2c038280ddc4-host\") on node \"crc\" DevicePath \"\""
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.623538 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465" (OuterVolumeSpecName: "kube-api-access-5h465") pod "e2e4444f-4e98-4a92-b305-2c038280ddc4" (UID: "e2e4444f-4e98-4a92-b305-2c038280ddc4"). InnerVolumeSpecName "kube-api-access-5h465". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:37:04 crc kubenswrapper[4688]: I0123 19:37:04.707535 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h465\" (UniqueName: \"kubernetes.io/projected/e2e4444f-4e98-4a92-b305-2c038280ddc4-kube-api-access-5h465\") on node \"crc\" DevicePath \"\""
Jan 23 19:37:05 crc kubenswrapper[4688]: I0123 19:37:05.369057 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e4444f-4e98-4a92-b305-2c038280ddc4" path="/var/lib/kubelet/pods/e2e4444f-4e98-4a92-b305-2c038280ddc4/volumes"
Jan 23 19:37:05 crc kubenswrapper[4688]: I0123 19:37:05.409966 4688 scope.go:117] "RemoveContainer" containerID="4200850bd52065b6ab816f649ea0c3a1c4dc441b0fc7b045e77cd382097ea04b"
Jan 23 19:37:05 crc kubenswrapper[4688]: I0123 19:37:05.410324 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/crc-debug-8n4z6"
Jan 23 19:37:11 crc kubenswrapper[4688]: I0123 19:37:11.356758 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:37:11 crc kubenswrapper[4688]: E0123 19:37:11.357513 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:37:22 crc kubenswrapper[4688]: I0123 19:37:22.356411 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:37:22 crc kubenswrapper[4688]: E0123 19:37:22.357157 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:37:29 crc kubenswrapper[4688]: I0123 19:37:29.553664 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9696bf65d-hqqnw_26d17642-a159-4c56-85da-4ce111096230/barbican-api/0.log"
Jan 23 19:37:29 crc kubenswrapper[4688]: I0123 19:37:29.755564 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9696bf65d-hqqnw_26d17642-a159-4c56-85da-4ce111096230/barbican-api-log/0.log"
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775f789f8-94pvr_69811c17-16d3-41e2-b891-6acdfeb480b0/barbican-keystone-listener/0.log" Jan 23 19:37:29 crc kubenswrapper[4688]: I0123 19:37:29.897512 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775f789f8-94pvr_69811c17-16d3-41e2-b891-6acdfeb480b0/barbican-keystone-listener-log/0.log" Jan 23 19:37:29 crc kubenswrapper[4688]: I0123 19:37:29.996726 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57fb8477df-2m7ng_c28c58c6-022f-44fc-878a-92a0ad162488/barbican-worker/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.051998 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57fb8477df-2m7ng_c28c58c6-022f-44fc-878a-92a0ad162488/barbican-worker-log/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.590032 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/ceilometer-notification-agent/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.617322 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2_fcefed39-8bf9-4782-8262-6616eee522f6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.633574 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/ceilometer-central-agent/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.879690 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/sg-core/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.880048 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/proxy-httpd/0.log" Jan 23 19:37:30 crc kubenswrapper[4688]: I0123 19:37:30.942758 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d04ebda-89c7-4c9c-9d26-280a6d1598f8/cinder-api/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.121052 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb86de93-e273-417f-8c60-8b6201635766/cinder-scheduler/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.134903 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d04ebda-89c7-4c9c-9d26-280a6d1598f8/cinder-api-log/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.232340 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb86de93-e273-417f-8c60-8b6201635766/probe/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.309073 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb_fc079b17-fa36-4e19-aac7-b8c309fa77e1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.489039 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8_45576589-fbbb-4556-9306-de4deba76388/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.563941 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/init/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.752631 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/init/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.834240 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l_0db8a4c7-1a83-44a3-a9b9-73868a2fe73e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:31 crc kubenswrapper[4688]: I0123 19:37:31.942696 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/dnsmasq-dns/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.020997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d00dfb95-d6b9-42c5-bd68-91cba08b97b4/glance-httpd/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.079637 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d00dfb95-d6b9-42c5-bd68-91cba08b97b4/glance-log/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.221896 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa9f2c9d-a6e3-43fb-9601-ce24f5e89417/glance-httpd/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.281017 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa9f2c9d-a6e3-43fb-9601-ce24f5e89417/glance-log/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.566605 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon/1.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.577710 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.671815 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5_2cb10503-bf60-4049-a2b0-7299899692b0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:32 crc kubenswrapper[4688]: I0123 19:37:32.840894 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p52gn_e2222dda-2ac5-4212-9cb1-bb87bc961472/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.109647 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486581-smm8p_c94d940e-9cfe-4bd3-bc70-fab5a68e0f20/keystone-cron/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.208292 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon-log/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.281912 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9ee596ca-3388-41b9-9651-b0f92e4b838c/kube-state-metrics/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.406989 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-788dd47598-8wt2n_cd02fba1-c4c0-4603-8801-92a63fa59f6a/keystone-api/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.443635 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4796d_30fe4fb5-c06c-4741-b83b-b5b6eef2603d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.898767 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b698d98c-7kjns_158df6c9-791b-411c-9405-74bf8eaa2995/neutron-httpd/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.924070 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj_f57f805b-6978-40eb-81c7-32d1ebde0a3f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:33 crc kubenswrapper[4688]: I0123 19:37:33.931386 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b698d98c-7kjns_158df6c9-791b-411c-9405-74bf8eaa2995/neutron-api/0.log" Jan 23 19:37:34 crc kubenswrapper[4688]: I0123 19:37:34.555654 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c7588894-f33b-452c-abfc-7576e58fbe4b/nova-cell0-conductor-conductor/0.log" Jan 23 19:37:34 crc kubenswrapper[4688]: I0123 19:37:34.867751 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_635921a5-2c42-44a0-8c9d-b1f9d5230145/nova-cell1-conductor-conductor/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.181250 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fe552058-5e47-429c-ac41-e315827552ab/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.206159 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e434f347-02aa-410e-a0c7-bcc65dee86ad/nova-api-log/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.388627 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-j64r4_b1183bb9-7531-4cbc-b0b8-c3df2ba56953/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.514878 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e434f347-02aa-410e-a0c7-bcc65dee86ad/nova-api-api/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.538017 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8cf51a7-6a79-4d01-8b66-036e1f113df2/nova-metadata-log/0.log" Jan 23 19:37:35 crc kubenswrapper[4688]: I0123 19:37:35.969284 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/mysql-bootstrap/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.158517 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/mysql-bootstrap/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.203965 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba03992a-5a6e-4f80-ad99-977cd7dc8854/nova-scheduler-scheduler/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.226268 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/galera/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.356856 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:37:36 crc kubenswrapper[4688]: E0123 19:37:36.357170 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.454807 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/mysql-bootstrap/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.613445 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/mysql-bootstrap/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.695712 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/galera/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.828940 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5043fc78-cadf-4542-8673-2a02149409f9/openstackclient/0.log" Jan 23 19:37:36 crc kubenswrapper[4688]: I0123 19:37:36.993565 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2mkcg_cb62b62e-86fd-434f-be45-f29d9ae27c76/openstack-network-exporter/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.192885 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server-init/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.389683 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovs-vswitchd/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.403650 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.422415 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server-init/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.639454 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zl7mq_c58b6a90-e622-44bd-824a-7bc35f16190e/ovn-controller/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.847426 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8cf51a7-6a79-4d01-8b66-036e1f113df2/nova-metadata-metadata/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.883509 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-288sf_2622f843-d555-43e1-b359-b490aab07eb2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:37 crc kubenswrapper[4688]: I0123 19:37:37.923354 4688 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-northd-0_1d4b65e4-7b44-449a-9505-c5bbc9f67c6c/openstack-network-exporter/0.log" Jan 23 19:37:38 crc kubenswrapper[4688]: I0123 19:37:38.051780 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d4b65e4-7b44-449a-9505-c5bbc9f67c6c/ovn-northd/0.log" Jan 23 19:37:38 crc kubenswrapper[4688]: I0123 19:37:38.660684 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed6ebe9c-b75e-42b7-81ce-70c82b890fa4/openstack-network-exporter/0.log" Jan 23 19:37:38 crc kubenswrapper[4688]: I0123 19:37:38.690810 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed6ebe9c-b75e-42b7-81ce-70c82b890fa4/ovsdbserver-nb/0.log" Jan 23 19:37:38 crc kubenswrapper[4688]: I0123 19:37:38.821451 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_11d2a676-bc2c-43fe-8195-8ae8300f7c8c/ovsdbserver-sb/0.log" Jan 23 19:37:38 crc kubenswrapper[4688]: I0123 19:37:38.849587 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_11d2a676-bc2c-43fe-8195-8ae8300f7c8c/openstack-network-exporter/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.147729 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df8898f5b-rfw5n_169bb621-8517-44d2-9193-1b75492e148f/placement-api/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.203730 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/init-config-reloader/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.303245 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df8898f5b-rfw5n_169bb621-8517-44d2-9193-1b75492e148f/placement-log/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.344823 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/init-config-reloader/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.376233 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/config-reloader/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.423283 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/prometheus/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.602351 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/thanos-sidecar/0.log" Jan 23 19:37:39 crc kubenswrapper[4688]: I0123 19:37:39.629013 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/setup-container/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.373320 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/setup-container/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.378518 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/setup-container/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.488618 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/rabbitmq/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.650670 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/setup-container/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.672720 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/rabbitmq/0.log" Jan 23 19:37:40 crc kubenswrapper[4688]: I0123 19:37:40.711680 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd_7506d9ea-fa02-4f06-b654-bb7857357a6f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.023742 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ll67b_b11e8139-4a7d-4cda-8d54-0c88a360f046/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.128883 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk_d81fb34b-f44c-413e-af3a-2b6ed6f82fed/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.291436 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-f6tdm_90a8ac5e-520d-44bd-a129-ce6b0c0f2786/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.395095 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5kb66_45add2ba-c382-4807-8995-43514182b85a/ssh-known-hosts-edpm-deployment/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.664043 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c564cf675-l776t_8985e53c-d4f0-4f9a-96be-a540d7279676/proxy-server/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.805950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vr6nh_d7367189-3db1-4176-8281-2b50a8b3df49/swift-ring-rebalance/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.850026 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c564cf675-l776t_8985e53c-d4f0-4f9a-96be-a540d7279676/proxy-httpd/0.log" Jan 23 19:37:41 crc kubenswrapper[4688]: I0123 19:37:41.969209 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-auditor/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.033780 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-reaper/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.148003 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-replicator/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.217287 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-server/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.269074 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-auditor/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.367169 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-replicator/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.386544 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-server/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.445132 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-updater/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.545862 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-expirer/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.568038 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-auditor/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.674739 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-replicator/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.756372 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-server/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.758518 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-updater/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.818404 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/rsync/0.log" Jan 23 19:37:42 crc kubenswrapper[4688]: I0123 19:37:42.937718 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/swift-recon-cron/0.log" Jan 23 19:37:43 crc kubenswrapper[4688]: I0123 19:37:43.050305 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx_fc299185-3ca0-4d2b-b24c-ab75fc65d49a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:43 crc kubenswrapper[4688]: I0123 19:37:43.222684 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_18226ae9-4f88-4376-a16d-b59b78912de7/tempest-tests-tempest-tests-runner/0.log" Jan 23 19:37:43 crc kubenswrapper[4688]: I0123 19:37:43.259177 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_53111825-5a43-4a5c-924a-39e6ded40854/test-operator-logs-container/0.log" Jan 23 19:37:43 crc kubenswrapper[4688]: I0123 19:37:43.473682 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-b4nck_e744642f-69d6-47a9-83a8-2cc90a504000/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:37:44 crc kubenswrapper[4688]: I0123 19:37:44.114925 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_245b0b2d-bf7c-4ac9-9fc3-f530a5cffead/watcher-applier/0.log" Jan 23 19:37:44 crc 
kubenswrapper[4688]: I0123 19:37:44.580340 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ded0f19f-c836-47bf-83f9-88634d30f76d/watcher-api-log/0.log" Jan 23 19:37:45 crc kubenswrapper[4688]: I0123 19:37:45.213999 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_0e25f1cb-df6e-441a-ba49-b8de51d05434/watcher-decision-engine/0.log" Jan 23 19:37:46 crc kubenswrapper[4688]: I0123 19:37:46.425641 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c/memcached/0.log" Jan 23 19:37:47 crc kubenswrapper[4688]: I0123 19:37:47.159926 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ded0f19f-c836-47bf-83f9-88634d30f76d/watcher-api/0.log" Jan 23 19:37:50 crc kubenswrapper[4688]: I0123 19:37:50.356882 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:37:50 crc kubenswrapper[4688]: E0123 19:37:50.357780 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:38:05 crc kubenswrapper[4688]: I0123 19:38:05.364914 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:38:05 crc kubenswrapper[4688]: E0123 19:38:05.365846 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:38:10 crc kubenswrapper[4688]: I0123 19:38:10.549644 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:38:10 crc kubenswrapper[4688]: I0123 19:38:10.804617 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:38:10 crc kubenswrapper[4688]: I0123 19:38:10.868699 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:38:10 crc kubenswrapper[4688]: I0123 19:38:10.889771 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:38:11 crc kubenswrapper[4688]: I0123 19:38:11.029443 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:38:11 crc kubenswrapper[4688]: I0123 19:38:11.071014 4688 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:38:11 crc kubenswrapper[4688]: I0123 19:38:11.135538 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/extract/0.log" Jan 23 19:38:11 crc kubenswrapper[4688]: I0123 19:38:11.517974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-2qzlh_bd62301c-d101-483c-8fe3-a1a5eddee7fc/manager/0.log" Jan 23 19:38:11 crc kubenswrapper[4688]: I0123 19:38:11.539372 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rmt2k_9c6839a5-f543-42e6-8c94-7138c1200112/manager/0.log" Jan 23 19:38:12 crc kubenswrapper[4688]: I0123 19:38:12.324668 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-wz5qj_e9c016a5-4953-4944-9f6e-f086e5a70918/manager/0.log" Jan 23 19:38:12 crc kubenswrapper[4688]: I0123 19:38:12.509023 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-q56fh_9ac53122-55ee-4db4-ad7c-8369e5117efe/manager/0.log" Jan 23 19:38:12 crc kubenswrapper[4688]: I0123 19:38:12.539792 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v4qgl_be846838-ce35-4c14-a0ea-3a501d4ef6ac/manager/0.log" Jan 23 19:38:12 crc kubenswrapper[4688]: I0123 19:38:12.695127 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wt2bv_e53011a2-ea48-49f2-afbc-0d4bf71ae725/manager/0.log" Jan 23 19:38:12 crc kubenswrapper[4688]: I0123 19:38:12.949361 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-ztl8x_30cd4339-ab66-45e3-937d-b3d9b5c3ef62/manager/0.log" Jan 23 19:38:13 crc kubenswrapper[4688]: I0123 19:38:13.022268 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-q4wv8_cae5b14f-5f7e-477f-a17a-9ad3930c6862/manager/0.log" Jan 23 19:38:13 crc kubenswrapper[4688]: I0123 19:38:13.213751 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-q6tnb_6daaa808-ea3a-43fb-bff1-285cf870df77/manager/0.log" Jan 23 19:38:13 crc kubenswrapper[4688]: I0123 19:38:13.275675 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-kjh92_b0ecc6d1-2625-4fba-860a-3931984ec27a/manager/0.log" Jan 23 19:38:13 crc kubenswrapper[4688]: I0123 19:38:13.453840 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6_4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f/manager/0.log" Jan 23 19:38:13 crc kubenswrapper[4688]: I0123 19:38:13.524622 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-47x6q_5e61a329-1ac1-4162-9d68-f3086ec3f16e/manager/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.124286 4688 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-mq2kk_1232d539-d6e5-4aa6-ac00-36be9120b247/manager/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.166668 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-k2g2j_676572f9-6a9f-4a4e-ae4c-8d8d300bf02a/manager/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.363432 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854s7w97_af851c54-521b-4a32-95fd-df9fd55d2eee/manager/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.527332 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b845cd55-nswgt_a7210d87-1894-4295-b8bd-0189ea05db2c/operator/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.704057 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z2jjg_491f4103-b520-4b84-9f90-a2d21d168a7a/registry-server/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.903493 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6xgwb_f277821c-c358-4283-ad35-61b187fb0878/manager/0.log" Jan 23 19:38:14 crc kubenswrapper[4688]: I0123 19:38:14.980583 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-zk9c9_f53bddcc-3d14-4066-980c-dcfa14f2965e/manager/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.228312 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qlqcd_8d9bd4af-849d-417f-9bbd-8e661b88d557/operator/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.324168 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9p6ps_b058c042-b4f7-4470-82ec-4f5336b47992/manager/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.706122 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59bd4c58c8-qlfvx_d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc/manager/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.740714 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-l59kj_6e8fb123-6d73-47c6-9d23-930c6ba3de69/manager/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.857389 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-k6hng_55bb8a6a-0401-4cdc-92fb-595c5eeb5e55/manager/0.log" Jan 23 19:38:15 crc kubenswrapper[4688]: I0123 19:38:15.965840 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-679dc965c9-qrkxl_26066212-ab72-4450-b9b3-b08e6b43e333/manager/0.log" Jan 23 19:38:19 crc kubenswrapper[4688]: I0123 19:38:19.356138 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:38:19 crc kubenswrapper[4688]: E0123 19:38:19.357053 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:38:33 crc kubenswrapper[4688]: I0123 19:38:33.362125 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:38:33 crc kubenswrapper[4688]: E0123 19:38:33.362979 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:38:37 crc kubenswrapper[4688]: I0123 19:38:37.219993 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hdshg_4203f041-a5af-47a8-999b-329b617fe415/control-plane-machine-set-operator/0.log" Jan 23 19:38:37 crc kubenswrapper[4688]: I0123 19:38:37.454804 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mrcbl_10c46862-d70f-445e-82a8-f76c17326a8b/machine-api-operator/0.log" Jan 23 19:38:37 crc kubenswrapper[4688]: I0123 19:38:37.481216 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mrcbl_10c46862-d70f-445e-82a8-f76c17326a8b/kube-rbac-proxy/0.log" Jan 23 19:38:47 crc kubenswrapper[4688]: I0123 19:38:47.358221 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:38:47 crc kubenswrapper[4688]: E0123 19:38:47.359040 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:38:51 crc kubenswrapper[4688]: I0123 19:38:51.004075 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rsccw_9bf3e910-f2fd-4f92-b345-422c1570bd89/cert-manager-controller/0.log" Jan 23 19:38:51 crc kubenswrapper[4688]: I0123 19:38:51.176274 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vgqkz_5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e/cert-manager-cainjector/0.log" Jan 23 19:38:51 crc kubenswrapper[4688]: I0123 19:38:51.245428 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q4zqg_893e289a-c400-40f2-b2cd-a9815c0cf488/cert-manager-webhook/0.log" Jan 23 19:39:02 crc kubenswrapper[4688]: I0123 19:39:02.379323 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:39:02 crc kubenswrapper[4688]: E0123 19:39:02.401033 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
Jan 23 19:39:02 crc kubenswrapper[4688]: E0123 19:39:02.401033 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.184488 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wcxg2_0ba8c497-753e-46c1-b423-cd7cd1b3616e/nmstate-console-plugin/0.log"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.425453 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4hkd8_a07585a4-2f3a-4062-9083-c64fcc9463a3/nmstate-handler/0.log"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.541210 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8dl6z_c65c520e-8672-463c-9337-3be6c949d06f/kube-rbac-proxy/0.log"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.646902 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8dl6z_c65c520e-8672-463c-9337-3be6c949d06f/nmstate-metrics/0.log"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.730292 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l8trt_645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81/nmstate-operator/0.log"
Jan 23 19:39:04 crc kubenswrapper[4688]: I0123 19:39:04.853367 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wzgkn_c43497a2-9efb-47c2-b161-88cfe2b1aabb/nmstate-webhook/0.log"
Jan 23 19:39:16 crc kubenswrapper[4688]: I0123 19:39:16.356848 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:39:16 crc kubenswrapper[4688]: E0123 19:39:16.357651 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:39:20 crc kubenswrapper[4688]: I0123 19:39:20.152554 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkspw_505c5412-6a67-4596-ae6a-bbd51d146126/prometheus-operator/0.log"
Jan 23 19:39:20 crc kubenswrapper[4688]: I0123 19:39:20.331544 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h_587391e1-2b8a-40a1-9106-cdda7cb8a2bd/prometheus-operator-admission-webhook/0.log"
Jan 23 19:39:20 crc kubenswrapper[4688]: I0123 19:39:20.369514 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f_318d598f-84d5-418c-b820-d7ade7fcc8de/prometheus-operator-admission-webhook/0.log"
Jan 23 19:39:20 crc kubenswrapper[4688]: I0123 19:39:20.529280 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pgd8p_9cb38355-91e8-4856-abfa-b307e3f1909b/perses-operator/0.log"
Jan 23 19:39:20 crc kubenswrapper[4688]: I0123 19:39:20.564044 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-86gvw_8f8e5732-68b1-4f4e-906c-303e1eb20baf/operator/0.log"
Jan 23 19:39:27 crc kubenswrapper[4688]: I0123 19:39:27.357976 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:39:27 crc kubenswrapper[4688]: E0123 19:39:27.358767 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.470154 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-89xj6_f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc/kube-rbac-proxy/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.563098 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-89xj6_f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc/controller/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.734874 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.860309 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.876410 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.933660 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:39:34 crc kubenswrapper[4688]: I0123 19:39:34.961894 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.122417 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.123706 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.140676 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.165323 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.294142 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.296404 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.313018 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.348122 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/controller/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.476455 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/kube-rbac-proxy/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.477601 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/frr-metrics/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.551835 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/kube-rbac-proxy-frr/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.714997 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/reloader/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.759021 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8kldf_183de16f-fe88-4b85-9c1c-980569d0a89d/frr-k8s-webhook-server/0.log"
Jan 23 19:39:35 crc kubenswrapper[4688]: I0123 19:39:35.976312 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-844488998d-d4vzw_1e8a4a5c-bbf0-404d-aada-461ca3e42d72/manager/0.log"
Jan 23 19:39:36 crc kubenswrapper[4688]: I0123 19:39:36.156956 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6979454977-pw2fp_61d2f464-2eea-403d-a6e7-3a5bb3a067a5/webhook-server/0.log"
Jan 23 19:39:36 crc kubenswrapper[4688]: I0123 19:39:36.236157 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zq5np_5950921c-c4d2-44ac-8fb9-853d22c0f04a/kube-rbac-proxy/0.log"
Jan 23 19:39:36 crc kubenswrapper[4688]: I0123 19:39:36.902244 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zq5np_5950921c-c4d2-44ac-8fb9-853d22c0f04a/speaker/0.log"
Jan 23 19:39:37 crc kubenswrapper[4688]: I0123 19:39:37.153956 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/frr/0.log"
Jan 23 19:39:38 crc kubenswrapper[4688]: I0123 19:39:38.356878 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180"
Jan 23 19:39:38 crc kubenswrapper[4688]: E0123 19:39:38.357581 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.214965 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.288437 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.362003 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.572903 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/extract/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.587003 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.616510 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.748940 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.918464 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.927111 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log" Jan 23 19:39:49 crc kubenswrapper[4688]: I0123 19:39:49.953788 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.105378 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.106038 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.167463 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/extract/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.290072 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.356809 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:39:50 crc kubenswrapper[4688]: E0123 19:39:50.357101 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.437263 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.441098 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.449036 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.653914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/extract/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.665435 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.672033 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log" Jan 23 19:39:50 crc kubenswrapper[4688]: I0123 19:39:50.838711 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.003760 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.010355 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.010391 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.171863 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.214805 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.388127 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.582742 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.605129 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.620381 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.847982 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log" Jan 23 19:39:51 crc kubenswrapper[4688]: I0123 19:39:51.887981 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.168014 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4gqq5_f9495fe1-3e6a-410d-8628-ebd588169767/marketplace-operator/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.219678 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/registry-server/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.412482 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.559050 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.675078 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.692524 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.803267 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/registry-server/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.948552 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log" Jan 23 19:39:52 crc kubenswrapper[4688]: I0123 19:39:52.971241 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.172711 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.183549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/registry-server/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.403379 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.408529 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.417666 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.599331 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log" Jan 23 19:39:53 crc kubenswrapper[4688]: I0123 19:39:53.608600 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log" Jan 23 19:39:54 crc kubenswrapper[4688]: I0123 19:39:54.233673 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/registry-server/0.log" Jan 23 19:40:03 crc kubenswrapper[4688]: I0123 19:40:03.356170 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:40:03 crc kubenswrapper[4688]: E0123 19:40:03.356892 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:40:06 crc kubenswrapper[4688]: I0123 19:40:06.602120 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkspw_505c5412-6a67-4596-ae6a-bbd51d146126/prometheus-operator/0.log" Jan 23 19:40:06 crc kubenswrapper[4688]: I0123 19:40:06.656668 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h_587391e1-2b8a-40a1-9106-cdda7cb8a2bd/prometheus-operator-admission-webhook/0.log" Jan 23 19:40:06 crc kubenswrapper[4688]: I0123 19:40:06.742430 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f_318d598f-84d5-418c-b820-d7ade7fcc8de/prometheus-operator-admission-webhook/0.log" Jan 23 19:40:06 crc kubenswrapper[4688]: I0123 19:40:06.867089 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-86gvw_8f8e5732-68b1-4f4e-906c-303e1eb20baf/operator/0.log" Jan 23 19:40:06 crc kubenswrapper[4688]: I0123 19:40:06.901019 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pgd8p_9cb38355-91e8-4856-abfa-b307e3f1909b/perses-operator/0.log" Jan 23 19:40:14 crc kubenswrapper[4688]: I0123 19:40:14.356691 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:40:15 crc kubenswrapper[4688]: I0123 19:40:15.301598 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def"} Jan 23 19:40:25 crc kubenswrapper[4688]: E0123 19:40:25.297502 4688 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.213:44454->38.129.56.213:41963: read tcp 38.129.56.213:44454->38.129.56.213:41963: read: connection reset by peer Jan 23 19:40:25 crc kubenswrapper[4688]: E0123 19:40:25.297536 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.213:44454->38.129.56.213:41963: write tcp 38.129.56.213:44454->38.129.56.213:41963: write: broken pipe Jan 23 19:42:11 crc kubenswrapper[4688]: I0123 19:42:11.439353 4688 generic.go:334] "Generic (PLEG): container finished" podID="6042bb85-ccfd-4a48-a512-7997683d1570" containerID="27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3" exitCode=0 Jan 23 19:42:11 crc kubenswrapper[4688]: I0123 19:42:11.439470 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" event={"ID":"6042bb85-ccfd-4a48-a512-7997683d1570","Type":"ContainerDied","Data":"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3"} Jan 23 19:42:11 crc kubenswrapper[4688]: I0123 19:42:11.440663 4688 scope.go:117] "RemoveContainer" containerID="27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3" Jan 23 19:42:11 crc kubenswrapper[4688]: I0123 19:42:11.966935 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzrlq_must-gather-jrrhs_6042bb85-ccfd-4a48-a512-7997683d1570/gather/0.log" Jan 23 19:42:21 crc kubenswrapper[4688]: I0123 19:42:21.673062 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vzrlq/must-gather-jrrhs"] Jan 23 19:42:21 crc kubenswrapper[4688]: I0123 19:42:21.673861 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="copy" containerID="cri-o://6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5" gracePeriod=2 Jan 23 19:42:21 crc kubenswrapper[4688]: I0123 
19:42:21.686769 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vzrlq/must-gather-jrrhs"] Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.221371 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzrlq_must-gather-jrrhs_6042bb85-ccfd-4a48-a512-7997683d1570/copy/0.log" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.222466 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.377015 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output\") pod \"6042bb85-ccfd-4a48-a512-7997683d1570\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.377154 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbml4\" (UniqueName: \"kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4\") pod \"6042bb85-ccfd-4a48-a512-7997683d1570\" (UID: \"6042bb85-ccfd-4a48-a512-7997683d1570\") " Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.391592 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4" (OuterVolumeSpecName: "kube-api-access-zbml4") pod "6042bb85-ccfd-4a48-a512-7997683d1570" (UID: "6042bb85-ccfd-4a48-a512-7997683d1570"). InnerVolumeSpecName "kube-api-access-zbml4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.478566 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbml4\" (UniqueName: \"kubernetes.io/projected/6042bb85-ccfd-4a48-a512-7997683d1570-kube-api-access-zbml4\") on node \"crc\" DevicePath \"\"" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.554617 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vzrlq_must-gather-jrrhs_6042bb85-ccfd-4a48-a512-7997683d1570/copy/0.log" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.555129 4688 generic.go:334] "Generic (PLEG): container finished" podID="6042bb85-ccfd-4a48-a512-7997683d1570" containerID="6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5" exitCode=143 Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.555253 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vzrlq/must-gather-jrrhs" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.555283 4688 scope.go:117] "RemoveContainer" containerID="6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.576254 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6042bb85-ccfd-4a48-a512-7997683d1570" (UID: "6042bb85-ccfd-4a48-a512-7997683d1570"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.580631 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6042bb85-ccfd-4a48-a512-7997683d1570-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.583753 4688 scope.go:117] "RemoveContainer" containerID="27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.665817 4688 scope.go:117] "RemoveContainer" containerID="6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5" Jan 23 19:42:22 crc kubenswrapper[4688]: E0123 19:42:22.666394 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5\": container with ID starting with 6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5 not found: ID does not exist" containerID="6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.666439 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5"} err="failed to get container status \"6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5\": rpc error: code = NotFound desc = could not find container \"6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5\": container with ID starting with 6f38e19d7f5df28a9e732913081fc2bbbcede3e64fb258d4d568e3c4bf311bd5 not found: ID does not exist" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.666474 4688 scope.go:117] "RemoveContainer" containerID="27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3" Jan 23 19:42:22 crc kubenswrapper[4688]: E0123 19:42:22.666917 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3\": container with ID starting with 27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3 not found: ID does not exist" containerID="27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3" Jan 23 19:42:22 crc kubenswrapper[4688]: I0123 19:42:22.666949 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3"} err="failed to get container status \"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3\": rpc error: code = NotFound desc = could not find container \"27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3\": container with ID starting with 27a9d7b8676517eb69d91a3d2ff33f6567c3b478d9b30ce04fab7a0314eb9cd3 not found: ID does not exist" Jan 23 19:42:23 crc kubenswrapper[4688]: I0123 19:42:23.368440 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" path="/var/lib/kubelet/pods/6042bb85-ccfd-4a48-a512-7997683d1570/volumes" Jan 23 19:42:36 crc kubenswrapper[4688]: I0123 19:42:36.965437 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 23 19:42:36 crc kubenswrapper[4688]: I0123 19:42:36.966060 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:42:59 crc kubenswrapper[4688]: I0123 19:42:59.519899 4688 scope.go:117] "RemoveContainer" containerID="990fd763b07327adbe400203e6fd5a7982f195315b064a73b0a390194a00a886" Jan 23 19:43:06 crc kubenswrapper[4688]: I0123 19:43:06.965499 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:43:06 crc kubenswrapper[4688]: I0123 19:43:06.966115 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:43:36 crc kubenswrapper[4688]: I0123 19:43:36.965907 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:43:36 crc kubenswrapper[4688]: I0123 19:43:36.966555 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:43:36 crc kubenswrapper[4688]: I0123 19:43:36.966619 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:43:36 crc kubenswrapper[4688]: I0123 19:43:36.967624 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:43:36 crc kubenswrapper[4688]: I0123 19:43:36.967688 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def" gracePeriod=600 Jan 23 19:43:37 crc kubenswrapper[4688]: I0123 19:43:37.229600 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def" exitCode=0 Jan 23 19:43:37 crc kubenswrapper[4688]: I0123 19:43:37.229815 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def"} Jan 23 19:43:37 crc kubenswrapper[4688]: I0123 19:43:37.229849 4688 scope.go:117] "RemoveContainer" containerID="462c384cd712434daa13664e3d663c24235f20f9861f2f2498665ca075be6180" Jan 23 19:43:38 crc kubenswrapper[4688]: I0123 19:43:38.245346 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"} Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.152776 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6"] Jan 23 19:45:00 crc kubenswrapper[4688]: E0123 19:45:00.154040 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="gather" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154067 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="gather" Jan 23 19:45:00 crc kubenswrapper[4688]: E0123 19:45:00.154126 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="copy" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154140 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="copy" Jan 23 19:45:00 crc kubenswrapper[4688]: E0123 19:45:00.154174 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e4444f-4e98-4a92-b305-2c038280ddc4" containerName="container-00" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154224 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e4444f-4e98-4a92-b305-2c038280ddc4" containerName="container-00" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154679 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="copy" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154724 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="6042bb85-ccfd-4a48-a512-7997683d1570" containerName="gather" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.154750 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e4444f-4e98-4a92-b305-2c038280ddc4" containerName="container-00" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.156046 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.158674 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.158981 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.169510 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6"] Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.195866 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.195978 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.196008 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj2d\" (UniqueName: \"kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.297956 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.298467 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.298580 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj2d\" (UniqueName: \"kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.299533 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume\") pod 
\"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.303633 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.324549 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj2d\" (UniqueName: \"kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d\") pod \"collect-profiles-29486625-zskn6\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.518352 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:00 crc kubenswrapper[4688]: I0123 19:45:00.977018 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6"] Jan 23 19:45:01 crc kubenswrapper[4688]: I0123 19:45:01.052922 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" event={"ID":"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85","Type":"ContainerStarted","Data":"dd16e91538f4ac65d0b5b41db901046c4d93a159c3ae49f70fb41bebdbab5774"} Jan 23 19:45:02 crc kubenswrapper[4688]: I0123 19:45:02.066344 4688 generic.go:334] "Generic (PLEG): container finished" podID="9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" containerID="6457ea4abec291073a1594edc0d814ba924d58df1e413bf6f377e5952b688289" exitCode=0 Jan 23 19:45:02 crc kubenswrapper[4688]: I0123 19:45:02.066431 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" event={"ID":"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85","Type":"ContainerDied","Data":"6457ea4abec291073a1594edc0d814ba924d58df1e413bf6f377e5952b688289"} Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.377042 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.476036 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume\") pod \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.476475 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume\") pod \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.476756 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnj2d\" (UniqueName: \"kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d\") pod \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\" (UID: \"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85\") " Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.476855 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume" (OuterVolumeSpecName: "config-volume") pod "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" (UID: "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.477448 4688 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.483045 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" (UID: "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.484738 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d" (OuterVolumeSpecName: "kube-api-access-nnj2d") pod "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" (UID: "9485ebb5-f1a1-4c2a-a870-8590ce4e8c85"). InnerVolumeSpecName "kube-api-access-nnj2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.579882 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnj2d\" (UniqueName: \"kubernetes.io/projected/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-kube-api-access-nnj2d\") on node \"crc\" DevicePath \"\"" Jan 23 19:45:03 crc kubenswrapper[4688]: I0123 19:45:03.579943 4688 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9485ebb5-f1a1-4c2a-a870-8590ce4e8c85-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 19:45:04 crc kubenswrapper[4688]: I0123 19:45:04.094292 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" event={"ID":"9485ebb5-f1a1-4c2a-a870-8590ce4e8c85","Type":"ContainerDied","Data":"dd16e91538f4ac65d0b5b41db901046c4d93a159c3ae49f70fb41bebdbab5774"} Jan 23 19:45:04 crc kubenswrapper[4688]: I0123 19:45:04.094617 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd16e91538f4ac65d0b5b41db901046c4d93a159c3ae49f70fb41bebdbab5774" Jan 23 19:45:04 crc kubenswrapper[4688]: I0123 19:45:04.094513 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486625-zskn6" Jan 23 19:45:04 crc kubenswrapper[4688]: I0123 19:45:04.462389 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"] Jan 23 19:45:04 crc kubenswrapper[4688]: I0123 19:45:04.471512 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486580-c6ksf"] Jan 23 19:45:05 crc kubenswrapper[4688]: I0123 19:45:05.367619 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace5702f-36da-49f7-8a3e-536784bf7b2a" path="/var/lib/kubelet/pods/ace5702f-36da-49f7-8a3e-536784bf7b2a/volumes" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.323787 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7fxjq/must-gather-fpltz"] Jan 23 19:45:43 crc kubenswrapper[4688]: E0123 19:45:43.324749 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" containerName="collect-profiles" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.324765 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" containerName="collect-profiles" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.324984 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="9485ebb5-f1a1-4c2a-a870-8590ce4e8c85" containerName="collect-profiles" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.326227 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.337923 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7fxjq"/"openshift-service-ca.crt" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.338028 4688 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7fxjq"/"default-dockercfg-ctq7z" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.338439 4688 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7fxjq"/"kube-root-ca.crt" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.350535 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7fxjq/must-gather-fpltz"] Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.449169 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75ds\" (UniqueName: \"kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.449650 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.551853 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75ds\" (UniqueName: \"kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.551975 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.552541 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.573690 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75ds\" (UniqueName: \"kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds\") pod \"must-gather-fpltz\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") " pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:43 crc kubenswrapper[4688]: I0123 19:45:43.661382 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/must-gather-fpltz" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.138016 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7fxjq/must-gather-fpltz"] Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.560281 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/must-gather-fpltz" event={"ID":"e60d2422-4bc0-4b1e-9659-0981cbe14bcc","Type":"ContainerStarted","Data":"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"} Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.560621 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/must-gather-fpltz" event={"ID":"e60d2422-4bc0-4b1e-9659-0981cbe14bcc","Type":"ContainerStarted","Data":"74d0030cadd7c97dcd8a5f69ebd1df8b1670fccb1c8e254c187ad48744d6f2a1"} Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.774611 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.777176 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.794238 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.877924 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.878007 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx594\" (UniqueName: \"kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.878165 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.980650 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.980770 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx594\" (UniqueName: \"kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.980921 4688 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.981125 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:44 crc kubenswrapper[4688]: I0123 19:45:44.981450 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:45 crc kubenswrapper[4688]: I0123 19:45:45.002242 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx594\" (UniqueName: \"kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594\") pod \"redhat-operators-859n5\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:45 crc kubenswrapper[4688]: I0123 19:45:45.109934 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:45 crc kubenswrapper[4688]: I0123 19:45:45.570835 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/must-gather-fpltz" event={"ID":"e60d2422-4bc0-4b1e-9659-0981cbe14bcc","Type":"ContainerStarted","Data":"9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d"} Jan 23 19:45:45 crc kubenswrapper[4688]: I0123 19:45:45.594580 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7fxjq/must-gather-fpltz" podStartSLOduration=2.594555987 podStartE2EDuration="2.594555987s" podCreationTimestamp="2026-01-23 19:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:45:45.586729892 +0000 UTC m=+5940.582554353" watchObservedRunningTime="2026-01-23 19:45:45.594555987 +0000 UTC m=+5940.590380428" Jan 23 19:45:45 crc kubenswrapper[4688]: I0123 19:45:45.614726 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:45:46 crc kubenswrapper[4688]: I0123 19:45:46.581947 4688 generic.go:334] "Generic (PLEG): container finished" podID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerID="f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6" exitCode=0 Jan 23 19:45:46 crc kubenswrapper[4688]: I0123 19:45:46.582059 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerDied","Data":"f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6"} Jan 23 19:45:46 crc kubenswrapper[4688]: I0123 19:45:46.582340 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" 
event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerStarted","Data":"191b5da9b0025a3485aab406feec510dec3deed1db3c67b9a867d64f62ac5ba8"} Jan 23 19:45:46 crc kubenswrapper[4688]: I0123 19:45:46.584224 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:45:47 crc kubenswrapper[4688]: I0123 19:45:47.601607 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerStarted","Data":"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba"} Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.371379 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-drlnb"] Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.372725 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.460886 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.461052 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwkv\" (UniqueName: \"kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.563088 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwkv\" (UniqueName: \"kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.563241 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.563371 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.589110 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwkv\" (UniqueName: \"kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv\") pod \"crc-debug-drlnb\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: I0123 19:45:48.699279 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:45:48 crc kubenswrapper[4688]: W0123 19:45:48.731729 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2afef1d_3b6f_4bad_a405_05bd599bc768.slice/crio-4d488678ebb8cb7c2ed25de61c090d79ac223c7fcd6f213bf1ff865db2e6de77 WatchSource:0}: Error finding container 4d488678ebb8cb7c2ed25de61c090d79ac223c7fcd6f213bf1ff865db2e6de77: Status 404 returned error can't find the container with id 4d488678ebb8cb7c2ed25de61c090d79ac223c7fcd6f213bf1ff865db2e6de77 Jan 23 19:45:49 crc kubenswrapper[4688]: I0123 19:45:49.621741 4688 generic.go:334] "Generic (PLEG): container finished" podID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerID="782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba" exitCode=0 Jan 23 19:45:49 crc kubenswrapper[4688]: I0123 19:45:49.621846 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerDied","Data":"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba"} Jan 23 19:45:49 crc kubenswrapper[4688]: I0123 19:45:49.624425 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" event={"ID":"c2afef1d-3b6f-4bad-a405-05bd599bc768","Type":"ContainerStarted","Data":"4725e80ed635df0ba7c9135a6aae6d63009ef27e195a0e48bb4d823f1ae24972"} Jan 23 19:45:49 crc kubenswrapper[4688]: I0123 19:45:49.624471 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" event={"ID":"c2afef1d-3b6f-4bad-a405-05bd599bc768","Type":"ContainerStarted","Data":"4d488678ebb8cb7c2ed25de61c090d79ac223c7fcd6f213bf1ff865db2e6de77"} Jan 23 19:45:49 crc kubenswrapper[4688]: I0123 19:45:49.673390 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" podStartSLOduration=1.673362652 podStartE2EDuration="1.673362652s" podCreationTimestamp="2026-01-23 19:45:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:45:49.668211124 +0000 UTC m=+5944.664035575" watchObservedRunningTime="2026-01-23 19:45:49.673362652 +0000 UTC m=+5944.669187093" Jan 23 19:45:50 crc kubenswrapper[4688]: I0123 19:45:50.639105 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerStarted","Data":"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3"} Jan 23 19:45:50 crc kubenswrapper[4688]: I0123 19:45:50.663498 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-859n5" podStartSLOduration=3.155229785 podStartE2EDuration="6.663477724s" podCreationTimestamp="2026-01-23 19:45:44 +0000 UTC" firstStartedPulling="2026-01-23 19:45:46.583909408 +0000 UTC m=+5941.579733859" lastFinishedPulling="2026-01-23 19:45:50.092157357 +0000 UTC m=+5945.087981798" observedRunningTime="2026-01-23 19:45:50.657813822 +0000 UTC m=+5945.653638283" watchObservedRunningTime="2026-01-23 19:45:50.663477724 +0000 UTC m=+5945.659302165" Jan 23 19:45:55 crc kubenswrapper[4688]: I0123 19:45:55.110756 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 
19:45:55 crc kubenswrapper[4688]: I0123 19:45:55.112559 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:45:56 crc kubenswrapper[4688]: I0123 19:45:56.168742 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-859n5" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="registry-server" probeResult="failure" output=< Jan 23 19:45:56 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:45:56 crc kubenswrapper[4688]: > Jan 23 19:45:59 crc kubenswrapper[4688]: I0123 19:45:59.626676 4688 scope.go:117] "RemoveContainer" containerID="d1e55df49a8662a17de4793a57d75ed262469a99d6d79a1bc408a56f8b5742f4" Jan 23 19:46:05 crc kubenswrapper[4688]: I0123 19:46:05.166933 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:46:05 crc kubenswrapper[4688]: I0123 19:46:05.220517 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:46:05 crc kubenswrapper[4688]: I0123 19:46:05.408042 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:46:06 crc kubenswrapper[4688]: I0123 19:46:06.820176 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-859n5" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="registry-server" containerID="cri-o://732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3" gracePeriod=2 Jan 23 19:46:06 crc kubenswrapper[4688]: I0123 19:46:06.964985 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:46:06 crc kubenswrapper[4688]: I0123 19:46:06.965309 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.383177 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.473781 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities\") pod \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.473875 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content\") pod \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.473919 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx594\" (UniqueName: \"kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594\") pod \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\" (UID: \"5dfb3f29-8e32-42fc-8325-dc3fc8867813\") " Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.474664 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities" (OuterVolumeSpecName: "utilities") pod "5dfb3f29-8e32-42fc-8325-dc3fc8867813" (UID: "5dfb3f29-8e32-42fc-8325-dc3fc8867813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.488531 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594" (OuterVolumeSpecName: "kube-api-access-fx594") pod "5dfb3f29-8e32-42fc-8325-dc3fc8867813" (UID: "5dfb3f29-8e32-42fc-8325-dc3fc8867813"). InnerVolumeSpecName "kube-api-access-fx594". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.576613 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.576680 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx594\" (UniqueName: \"kubernetes.io/projected/5dfb3f29-8e32-42fc-8325-dc3fc8867813-kube-api-access-fx594\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.662355 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dfb3f29-8e32-42fc-8325-dc3fc8867813" (UID: "5dfb3f29-8e32-42fc-8325-dc3fc8867813"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.679553 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dfb3f29-8e32-42fc-8325-dc3fc8867813-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.840554 4688 generic.go:334] "Generic (PLEG): container finished" podID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerID="732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3" exitCode=0 Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.840599 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerDied","Data":"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3"} Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.840626 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-859n5" event={"ID":"5dfb3f29-8e32-42fc-8325-dc3fc8867813","Type":"ContainerDied","Data":"191b5da9b0025a3485aab406feec510dec3deed1db3c67b9a867d64f62ac5ba8"} Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.840642 4688 scope.go:117] "RemoveContainer" containerID="732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.840844 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-859n5" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.865642 4688 scope.go:117] "RemoveContainer" containerID="782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.887218 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.899223 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-859n5"] Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.901398 4688 scope.go:117] "RemoveContainer" containerID="f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.947869 4688 scope.go:117] "RemoveContainer" containerID="732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3" Jan 23 19:46:07 crc kubenswrapper[4688]: E0123 19:46:07.948415 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3\": container with ID starting with 732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3 not found: ID does not exist" containerID="732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.948519 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3"} err="failed to get container status \"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3\": rpc error: code = NotFound desc = could not find container \"732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3\": container with ID starting with 732620824b42be85af84a57f421ebc7cfecd96c9fc9a1483a1f59e34339394d3 not found: ID does not exist" Jan 23 19:46:07 crc 
kubenswrapper[4688]: I0123 19:46:07.948606 4688 scope.go:117] "RemoveContainer" containerID="782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba" Jan 23 19:46:07 crc kubenswrapper[4688]: E0123 19:46:07.948902 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba\": container with ID starting with 782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba not found: ID does not exist" containerID="782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.948986 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba"} err="failed to get container status \"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba\": rpc error: code = NotFound desc = could not find container \"782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba\": container with ID starting with 782847d6cf3269a90581fdbb7ced1045fdef8e65bc31362891af7a822dff5bba not found: ID does not exist" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.949051 4688 scope.go:117] "RemoveContainer" containerID="f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6" Jan 23 19:46:07 crc kubenswrapper[4688]: E0123 19:46:07.949392 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6\": container with ID starting with f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6 not found: ID does not exist" containerID="f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6" Jan 23 19:46:07 crc kubenswrapper[4688]: I0123 19:46:07.949482 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6"} err="failed to get container status \"f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6\": rpc error: code = NotFound desc = could not find container \"f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6\": container with ID starting with f444be62b50f218a9e1d7557390febac32e21ef7369aba3a86c1e5899d9529f6 not found: ID does not exist" Jan 23 19:46:09 crc kubenswrapper[4688]: I0123 19:46:09.369525 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" path="/var/lib/kubelet/pods/5dfb3f29-8e32-42fc-8325-dc3fc8867813/volumes" Jan 23 19:46:33 crc kubenswrapper[4688]: I0123 19:46:33.063235 4688 generic.go:334] "Generic (PLEG): container finished" podID="c2afef1d-3b6f-4bad-a405-05bd599bc768" containerID="4725e80ed635df0ba7c9135a6aae6d63009ef27e195a0e48bb4d823f1ae24972" exitCode=0 Jan 23 19:46:33 crc kubenswrapper[4688]: I0123 19:46:33.063309 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" event={"ID":"c2afef1d-3b6f-4bad-a405-05bd599bc768","Type":"ContainerDied","Data":"4725e80ed635df0ba7c9135a6aae6d63009ef27e195a0e48bb4d823f1ae24972"} Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.217499 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.254314 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-drlnb"] Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.265227 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-drlnb"] Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.367482 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xwkv\" (UniqueName: \"kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv\") pod \"c2afef1d-3b6f-4bad-a405-05bd599bc768\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.367573 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host\") pod \"c2afef1d-3b6f-4bad-a405-05bd599bc768\" (UID: \"c2afef1d-3b6f-4bad-a405-05bd599bc768\") " Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.368085 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host" (OuterVolumeSpecName: "host") pod "c2afef1d-3b6f-4bad-a405-05bd599bc768" (UID: "c2afef1d-3b6f-4bad-a405-05bd599bc768"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.372984 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv" (OuterVolumeSpecName: "kube-api-access-6xwkv") pod "c2afef1d-3b6f-4bad-a405-05bd599bc768" (UID: "c2afef1d-3b6f-4bad-a405-05bd599bc768"). InnerVolumeSpecName "kube-api-access-6xwkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.469855 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xwkv\" (UniqueName: \"kubernetes.io/projected/c2afef1d-3b6f-4bad-a405-05bd599bc768-kube-api-access-6xwkv\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:34 crc kubenswrapper[4688]: I0123 19:46:34.469903 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c2afef1d-3b6f-4bad-a405-05bd599bc768-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.088129 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d488678ebb8cb7c2ed25de61c090d79ac223c7fcd6f213bf1ff865db2e6de77" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.088170 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-drlnb" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.367986 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2afef1d-3b6f-4bad-a405-05bd599bc768" path="/var/lib/kubelet/pods/c2afef1d-3b6f-4bad-a405-05bd599bc768/volumes" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.438312 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppg7c"] Jan 23 19:46:35 crc kubenswrapper[4688]: E0123 19:46:35.438842 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2afef1d-3b6f-4bad-a405-05bd599bc768" containerName="container-00" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.438865 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2afef1d-3b6f-4bad-a405-05bd599bc768" containerName="container-00" Jan 23 19:46:35 crc kubenswrapper[4688]: E0123 19:46:35.438914 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="extract-utilities" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.438925 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="extract-utilities" Jan 23 19:46:35 crc kubenswrapper[4688]: E0123 19:46:35.438946 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="registry-server" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.438956 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="registry-server" Jan 23 19:46:35 crc kubenswrapper[4688]: E0123 19:46:35.438973 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="extract-content" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.438979 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="extract-content" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.439306 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2afef1d-3b6f-4bad-a405-05bd599bc768" containerName="container-00" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.439326 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfb3f29-8e32-42fc-8325-dc3fc8867813" containerName="registry-server" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.440065 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.592728 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.592918 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhhwl\" (UniqueName: \"kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.695129 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.695352 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhhwl\" (UniqueName: \"kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.695872 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:35 crc kubenswrapper[4688]: I0123 19:46:35.773053 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhhwl\" (UniqueName: \"kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl\") pod \"crc-debug-ppg7c\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:36 crc kubenswrapper[4688]: I0123 19:46:36.060683 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:36 crc kubenswrapper[4688]: I0123 19:46:36.964911 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:46:36 crc kubenswrapper[4688]: I0123 19:46:36.965271 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:46:37 crc kubenswrapper[4688]: I0123 19:46:37.107277 4688 generic.go:334] "Generic (PLEG): container finished" podID="a7cf94d9-66ad-466c-b53f-52136ca983f7" containerID="3b9426e89f1d9a3595c08033d624359ccc777644fd194322b1c1e8346af501c7" exitCode=0 Jan 23 19:46:37 crc kubenswrapper[4688]: I0123 19:46:37.107320 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" event={"ID":"a7cf94d9-66ad-466c-b53f-52136ca983f7","Type":"ContainerDied","Data":"3b9426e89f1d9a3595c08033d624359ccc777644fd194322b1c1e8346af501c7"} Jan 23 19:46:37 crc kubenswrapper[4688]: I0123 19:46:37.107346 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" event={"ID":"a7cf94d9-66ad-466c-b53f-52136ca983f7","Type":"ContainerStarted","Data":"4db4f7d3b4601ed6124f2621b014b366e305e9297f40e0d5774471dafbdd837b"} Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.262980 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.342784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host\") pod \"a7cf94d9-66ad-466c-b53f-52136ca983f7\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.342854 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host" (OuterVolumeSpecName: "host") pod "a7cf94d9-66ad-466c-b53f-52136ca983f7" (UID: "a7cf94d9-66ad-466c-b53f-52136ca983f7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.342881 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhhwl\" (UniqueName: \"kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl\") pod \"a7cf94d9-66ad-466c-b53f-52136ca983f7\" (UID: \"a7cf94d9-66ad-466c-b53f-52136ca983f7\") " Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.343518 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7cf94d9-66ad-466c-b53f-52136ca983f7-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.356401 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl" (OuterVolumeSpecName: "kube-api-access-rhhwl") pod "a7cf94d9-66ad-466c-b53f-52136ca983f7" (UID: "a7cf94d9-66ad-466c-b53f-52136ca983f7"). InnerVolumeSpecName "kube-api-access-rhhwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:46:38 crc kubenswrapper[4688]: I0123 19:46:38.445448 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhhwl\" (UniqueName: \"kubernetes.io/projected/a7cf94d9-66ad-466c-b53f-52136ca983f7-kube-api-access-rhhwl\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.130623 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" event={"ID":"a7cf94d9-66ad-466c-b53f-52136ca983f7","Type":"ContainerDied","Data":"4db4f7d3b4601ed6124f2621b014b366e305e9297f40e0d5774471dafbdd837b"} Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.130667 4688 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db4f7d3b4601ed6124f2621b014b366e305e9297f40e0d5774471dafbdd837b" Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.130692 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppg7c" Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.222129 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppg7c"] Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.231730 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppg7c"] Jan 23 19:46:39 crc kubenswrapper[4688]: I0123 19:46:39.368437 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7cf94d9-66ad-466c-b53f-52136ca983f7" path="/var/lib/kubelet/pods/a7cf94d9-66ad-466c-b53f-52136ca983f7/volumes" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.408284 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppglz"] Jan 23 19:46:40 crc kubenswrapper[4688]: E0123 19:46:40.408923 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7cf94d9-66ad-466c-b53f-52136ca983f7" containerName="container-00" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.408936 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7cf94d9-66ad-466c-b53f-52136ca983f7" containerName="container-00" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.409151 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7cf94d9-66ad-466c-b53f-52136ca983f7" containerName="container-00" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.409878 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.484609 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.484969 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4656\" (UniqueName: \"kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.586991 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.587094 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4656\" (UniqueName: \"kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.587417 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: 
I0123 19:46:40.606916 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4656\" (UniqueName: \"kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656\") pod \"crc-debug-ppglz\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: I0123 19:46:40.726114 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:40 crc kubenswrapper[4688]: W0123 19:46:40.791491 4688 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e64e27a_601b_41f1_bd1c_62563298553a.slice/crio-98b674971f5a6c9694fb32c4552f0ef2e944bf5ee4b0b9ba1757d923a34af363 WatchSource:0}: Error finding container 98b674971f5a6c9694fb32c4552f0ef2e944bf5ee4b0b9ba1757d923a34af363: Status 404 returned error can't find the container with id 98b674971f5a6c9694fb32c4552f0ef2e944bf5ee4b0b9ba1757d923a34af363 Jan 23 19:46:41 crc kubenswrapper[4688]: I0123 19:46:41.150746 4688 generic.go:334] "Generic (PLEG): container finished" podID="5e64e27a-601b-41f1-bd1c-62563298553a" containerID="f83f20976aac3079e836ea4821d7d1b8e298b8bc592b44c5a4aa8084a1979981" exitCode=0 Jan 23 19:46:41 crc kubenswrapper[4688]: I0123 19:46:41.150845 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" event={"ID":"5e64e27a-601b-41f1-bd1c-62563298553a","Type":"ContainerDied","Data":"f83f20976aac3079e836ea4821d7d1b8e298b8bc592b44c5a4aa8084a1979981"} Jan 23 19:46:41 crc kubenswrapper[4688]: I0123 19:46:41.151103 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" event={"ID":"5e64e27a-601b-41f1-bd1c-62563298553a","Type":"ContainerStarted","Data":"98b674971f5a6c9694fb32c4552f0ef2e944bf5ee4b0b9ba1757d923a34af363"} Jan 23 19:46:41 crc kubenswrapper[4688]: I0123 19:46:41.197831 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppglz"] Jan 23 19:46:41 crc kubenswrapper[4688]: I0123 19:46:41.210401 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7fxjq/crc-debug-ppglz"] Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.262534 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.322734 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host\") pod \"5e64e27a-601b-41f1-bd1c-62563298553a\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.322929 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host" (OuterVolumeSpecName: "host") pod "5e64e27a-601b-41f1-bd1c-62563298553a" (UID: "5e64e27a-601b-41f1-bd1c-62563298553a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.323850 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4656\" (UniqueName: \"kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656\") pod \"5e64e27a-601b-41f1-bd1c-62563298553a\" (UID: \"5e64e27a-601b-41f1-bd1c-62563298553a\") " Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.324847 4688 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e64e27a-601b-41f1-bd1c-62563298553a-host\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.331316 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656" (OuterVolumeSpecName: "kube-api-access-n4656") pod "5e64e27a-601b-41f1-bd1c-62563298553a" (UID: "5e64e27a-601b-41f1-bd1c-62563298553a"). InnerVolumeSpecName "kube-api-access-n4656". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:46:42 crc kubenswrapper[4688]: I0123 19:46:42.427759 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4656\" (UniqueName: \"kubernetes.io/projected/5e64e27a-601b-41f1-bd1c-62563298553a-kube-api-access-n4656\") on node \"crc\" DevicePath \"\"" Jan 23 19:46:43 crc kubenswrapper[4688]: I0123 19:46:43.171147 4688 scope.go:117] "RemoveContainer" containerID="f83f20976aac3079e836ea4821d7d1b8e298b8bc592b44c5a4aa8084a1979981" Jan 23 19:46:43 crc kubenswrapper[4688]: I0123 19:46:43.171211 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/crc-debug-ppglz" Jan 23 19:46:43 crc kubenswrapper[4688]: I0123 19:46:43.367649 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e64e27a-601b-41f1-bd1c-62563298553a" path="/var/lib/kubelet/pods/5e64e27a-601b-41f1-bd1c-62563298553a/volumes" Jan 23 19:47:06 crc kubenswrapper[4688]: I0123 19:47:06.965113 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 19:47:06 crc kubenswrapper[4688]: I0123 19:47:06.965726 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 19:47:06 crc kubenswrapper[4688]: I0123 19:47:06.965781 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" Jan 23 19:47:06 crc kubenswrapper[4688]: I0123 19:47:06.967129 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 19:47:06 crc kubenswrapper[4688]: I0123 19:47:06.967220 4688 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" gracePeriod=600 Jan 23 19:47:07 crc kubenswrapper[4688]: E0123 19:47:07.094982 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:47:07 crc kubenswrapper[4688]: I0123 19:47:07.425922 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" exitCode=0 Jan 23 19:47:07 crc kubenswrapper[4688]: I0123 19:47:07.425965 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"} Jan 23 19:47:07 crc kubenswrapper[4688]: I0123 19:47:07.425999 4688 scope.go:117] "RemoveContainer" containerID="dc55adeca5cca676edef07e938e9c08ccd8e140ac47ae487d03feb45f7274def" Jan 23 19:47:07 crc kubenswrapper[4688]: I0123 19:47:07.426753 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:47:07 crc kubenswrapper[4688]: E0123 19:47:07.427148 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.120221 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:21 crc kubenswrapper[4688]: E0123 19:47:21.121270 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e64e27a-601b-41f1-bd1c-62563298553a" containerName="container-00" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.121288 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e64e27a-601b-41f1-bd1c-62563298553a" containerName="container-00" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.121576 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e64e27a-601b-41f1-bd1c-62563298553a" containerName="container-00" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.123328 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.145419 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.191933 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.192020 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bqg\" (UniqueName: \"kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.192175 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.299483 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.299523 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bqg\" (UniqueName: \"kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.299589 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.300027 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.300393 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.335404 4688 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w6bqg\" (UniqueName: \"kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg\") pod \"redhat-marketplace-c8cqd\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:21 crc kubenswrapper[4688]: I0123 19:47:21.450919 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.006378 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.090438 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9696bf65d-hqqnw_26d17642-a159-4c56-85da-4ce111096230/barbican-api/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.329247 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-9696bf65d-hqqnw_26d17642-a159-4c56-85da-4ce111096230/barbican-api-log/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.356029 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:47:22 crc kubenswrapper[4688]: E0123 19:47:22.356275 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.531693 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775f789f8-94pvr_69811c17-16d3-41e2-b891-6acdfeb480b0/barbican-keystone-listener/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.581623 4688 generic.go:334] "Generic (PLEG): container finished" podID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerID="ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c" exitCode=0 Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.581671 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerDied","Data":"ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c"} Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.581697 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerStarted","Data":"5656fea3cc4decef8c7e063a7e8ae8ef705b1413f71abfc4e4a1aa81f43ae366"} Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.622739 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-775f789f8-94pvr_69811c17-16d3-41e2-b891-6acdfeb480b0/barbican-keystone-listener-log/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.825639 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-57fb8477df-2m7ng_c28c58c6-022f-44fc-878a-92a0ad162488/barbican-worker-log/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.826255 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-57fb8477df-2m7ng_c28c58c6-022f-44fc-878a-92a0ad162488/barbican-worker/0.log" Jan 23 19:47:22 crc kubenswrapper[4688]: I0123 19:47:22.996318 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-p24f2_fcefed39-8bf9-4782-8262-6616eee522f6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.097439 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/ceilometer-central-agent/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.166851 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/ceilometer-notification-agent/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.244365 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/proxy-httpd/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.356656 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a9fb5995-71ba-46d0-8e43-e5325af334dd/sg-core/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.439462 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d04ebda-89c7-4c9c-9d26-280a6d1598f8/cinder-api/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.493275 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d04ebda-89c7-4c9c-9d26-280a6d1598f8/cinder-api-log/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.593724 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerStarted","Data":"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55"} Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.724135 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb86de93-e273-417f-8c60-8b6201635766/cinder-scheduler/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.735500 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb86de93-e273-417f-8c60-8b6201635766/probe/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.891138 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwtwb_fc079b17-fa36-4e19-aac7-b8c309fa77e1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:23 crc kubenswrapper[4688]: I0123 19:47:23.944894 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c7jv8_45576589-fbbb-4556-9306-de4deba76388/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.134038 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/init/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.266680 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/init/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.369651 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rdr6l_0db8a4c7-1a83-44a3-a9b9-73868a2fe73e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.443797 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-f4d4c4b7-8gcp9_304eee98-817f-482f-88a4-0390cfa06ffc/dnsmasq-dns/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.581224 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d00dfb95-d6b9-42c5-bd68-91cba08b97b4/glance-httpd/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.603271 4688 generic.go:334] "Generic (PLEG): container finished" podID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerID="9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55" exitCode=0 Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.603322 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerDied","Data":"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55"} Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.612410 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d00dfb95-d6b9-42c5-bd68-91cba08b97b4/glance-log/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.795987 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa9f2c9d-a6e3-43fb-9601-ce24f5e89417/glance-httpd/0.log" Jan 23 19:47:24 crc kubenswrapper[4688]: I0123 19:47:24.798325 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_aa9f2c9d-a6e3-43fb-9601-ce24f5e89417/glance-log/0.log" Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.026646 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon/1.log" Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.305564 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon/0.log" Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.544248 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lmvm5_2cb10503-bf60-4049-a2b0-7299899692b0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.619742 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerStarted","Data":"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d"} Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.667715 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c8cqd" podStartSLOduration=2.271672325 podStartE2EDuration="4.66768911s" podCreationTimestamp="2026-01-23 19:47:21 +0000 UTC" firstStartedPulling="2026-01-23 19:47:22.591397104 +0000 UTC m=+6037.587221545" lastFinishedPulling="2026-01-23 19:47:24.987413899 +0000 UTC m=+6039.983238330" observedRunningTime="2026-01-23 19:47:25.640722817 +0000 UTC m=+6040.636547288" watchObservedRunningTime="2026-01-23 19:47:25.66768911 +0000 UTC m=+6040.663513571" Jan 23 
19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.807028 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p52gn_e2222dda-2ac5-4212-9cb1-bb87bc961472/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:25 crc kubenswrapper[4688]: I0123 19:47:25.973591 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-689f6b4f86-pbwfh_56f27597-f638-4b6d-84e9-3a3671c089ac/horizon-log/0.log" Jan 23 19:47:26 crc kubenswrapper[4688]: I0123 19:47:26.323732 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486581-smm8p_c94d940e-9cfe-4bd3-bc70-fab5a68e0f20/keystone-cron/0.log" Jan 23 19:47:26 crc kubenswrapper[4688]: I0123 19:47:26.477164 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9ee596ca-3388-41b9-9651-b0f92e4b838c/kube-state-metrics/0.log" Jan 23 19:47:26 crc kubenswrapper[4688]: I0123 19:47:26.480029 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-788dd47598-8wt2n_cd02fba1-c4c0-4603-8801-92a63fa59f6a/keystone-api/0.log" Jan 23 19:47:26 crc kubenswrapper[4688]: I0123 19:47:26.652136 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4796d_30fe4fb5-c06c-4741-b83b-b5b6eef2603d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:27 crc kubenswrapper[4688]: I0123 19:47:27.289956 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b698d98c-7kjns_158df6c9-791b-411c-9405-74bf8eaa2995/neutron-api/0.log" Jan 23 19:47:27 crc kubenswrapper[4688]: I0123 19:47:27.657800 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b698d98c-7kjns_158df6c9-791b-411c-9405-74bf8eaa2995/neutron-httpd/0.log" Jan 23 19:47:27 crc kubenswrapper[4688]: I0123 19:47:27.735760 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-zj2zj_f57f805b-6978-40eb-81c7-32d1ebde0a3f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:28 crc kubenswrapper[4688]: I0123 19:47:28.378928 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c7588894-f33b-452c-abfc-7576e58fbe4b/nova-cell0-conductor-conductor/0.log" Jan 23 19:47:28 crc kubenswrapper[4688]: I0123 19:47:28.818260 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_635921a5-2c42-44a0-8c9d-b1f9d5230145/nova-cell1-conductor-conductor/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.049306 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e434f347-02aa-410e-a0c7-bcc65dee86ad/nova-api-log/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.314785 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fe552058-5e47-429c-ac41-e315827552ab/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.344368 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-j64r4_b1183bb9-7531-4cbc-b0b8-c3df2ba56953/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.741294 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_e8cf51a7-6a79-4d01-8b66-036e1f113df2/nova-metadata-log/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.755663 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e434f347-02aa-410e-a0c7-bcc65dee86ad/nova-api-api/0.log" Jan 23 19:47:29 crc kubenswrapper[4688]: I0123 19:47:29.971650 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/mysql-bootstrap/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.196818 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/mysql-bootstrap/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.305328 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697e30b7-f8ce-45c0-8299-b6021b11a639/galera/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.315752 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba03992a-5a6e-4f80-ad99-977cd7dc8854/nova-scheduler-scheduler/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.566067 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/mysql-bootstrap/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.778549 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/mysql-bootstrap/0.log" Jan 23 19:47:30 crc kubenswrapper[4688]: I0123 19:47:30.816709 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_4c805a15-64d3-4320-940e-a6859affbf9c/galera/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.025090 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5043fc78-cadf-4542-8673-2a02149409f9/openstackclient/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.093641 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2mkcg_cb62b62e-86fd-434f-be45-f29d9ae27c76/openstack-network-exporter/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.296565 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server-init/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.451098 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.452213 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.505756 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.512398 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.544556 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovsdb-server-init/0.log" Jan 23 19:47:31 crc 
kubenswrapper[4688]: I0123 19:47:31.566942 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rjmgm_99ba3329-3970-44e1-b6b0-c4c6a6db2b96/ovs-vswitchd/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.776884 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.798113 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zl7mq_c58b6a90-e622-44bd-824a-7bc35f16190e/ovn-controller/0.log" Jan 23 19:47:31 crc kubenswrapper[4688]: I0123 19:47:31.840931 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.043165 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-288sf_2622f843-d555-43e1-b359-b490aab07eb2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.093363 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d4b65e4-7b44-449a-9505-c5bbc9f67c6c/openstack-network-exporter/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.202534 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e8cf51a7-6a79-4d01-8b66-036e1f113df2/nova-metadata-metadata/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.240849 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1d4b65e4-7b44-449a-9505-c5bbc9f67c6c/ovn-northd/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.342952 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed6ebe9c-b75e-42b7-81ce-70c82b890fa4/openstack-network-exporter/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.444025 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ed6ebe9c-b75e-42b7-81ce-70c82b890fa4/ovsdbserver-nb/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.779492 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_11d2a676-bc2c-43fe-8195-8ae8300f7c8c/openstack-network-exporter/0.log" Jan 23 19:47:32 crc kubenswrapper[4688]: I0123 19:47:32.863342 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_11d2a676-bc2c-43fe-8195-8ae8300f7c8c/ovsdbserver-sb/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.200213 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/init-config-reloader/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.229867 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df8898f5b-rfw5n_169bb621-8517-44d2-9193-1b75492e148f/placement-api/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.283260 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6df8898f5b-rfw5n_169bb621-8517-44d2-9193-1b75492e148f/placement-log/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.384158 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/init-config-reloader/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.422706 4688 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/config-reloader/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.482950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/prometheus/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.575909 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d4a7e167-5a90-4925-8004-520317d7826f/thanos-sidecar/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.663412 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/setup-container/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.742945 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c8cqd" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="registry-server" containerID="cri-o://1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d" gracePeriod=2 Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.900000 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/setup-container/0.log" Jan 23 19:47:33 crc kubenswrapper[4688]: I0123 19:47:33.911692 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_29a2e74d-781b-4d79-ae54-7a37c75adee5/rabbitmq/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.017302 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/setup-container/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.203060 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/setup-container/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.289754 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9829e8b2-ebbc-4326-8a8d-2ceef863a9db/rabbitmq/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.305650 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.331030 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6bqg\" (UniqueName: \"kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg\") pod \"a4cb30c2-06c5-449d-a55d-d379f3f09440\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.331252 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities\") pod \"a4cb30c2-06c5-449d-a55d-d379f3f09440\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.331327 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content\") pod \"a4cb30c2-06c5-449d-a55d-d379f3f09440\" (UID: \"a4cb30c2-06c5-449d-a55d-d379f3f09440\") " Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.332075 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities" (OuterVolumeSpecName: "utilities") pod "a4cb30c2-06c5-449d-a55d-d379f3f09440" (UID: "a4cb30c2-06c5-449d-a55d-d379f3f09440"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.338635 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg" (OuterVolumeSpecName: "kube-api-access-w6bqg") pod "a4cb30c2-06c5-449d-a55d-d379f3f09440" (UID: "a4cb30c2-06c5-449d-a55d-d379f3f09440"). InnerVolumeSpecName "kube-api-access-w6bqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.372901 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bnprd_7506d9ea-fa02-4f06-b654-bb7857357a6f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.379792 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4cb30c2-06c5-449d-a55d-d379f3f09440" (UID: "a4cb30c2-06c5-449d-a55d-d379f3f09440"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.435152 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6bqg\" (UniqueName: \"kubernetes.io/projected/a4cb30c2-06c5-449d-a55d-d379f3f09440-kube-api-access-w6bqg\") on node \"crc\" DevicePath \"\"" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.435269 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.435289 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cb30c2-06c5-449d-a55d-d379f3f09440-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.600502 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-48cnk_d81fb34b-f44c-413e-af3a-2b6ed6f82fed/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.602416 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-ll67b_b11e8139-4a7d-4cda-8d54-0c88a360f046/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.758751 4688 generic.go:334] "Generic (PLEG): container finished" podID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerID="1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d" exitCode=0 Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.758807 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerDied","Data":"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d"} Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.758843 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8cqd" event={"ID":"a4cb30c2-06c5-449d-a55d-d379f3f09440","Type":"ContainerDied","Data":"5656fea3cc4decef8c7e063a7e8ae8ef705b1413f71abfc4e4a1aa81f43ae366"} Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.758869 4688 scope.go:117] "RemoveContainer" containerID="1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.758878 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8cqd" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.785058 4688 scope.go:117] "RemoveContainer" containerID="9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.812682 4688 scope.go:117] "RemoveContainer" containerID="ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.844640 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.872821 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-f6tdm_90a8ac5e-520d-44bd-a129-ce6b0c0f2786/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.873080 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8cqd"] Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.893115 4688 scope.go:117] "RemoveContainer" containerID="1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d" Jan 23 19:47:34 crc kubenswrapper[4688]: E0123 19:47:34.893677 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d\": container with ID starting with 1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d not found: ID does not exist" containerID="1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.893701 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d"} err="failed to get container status \"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d\": rpc error: code = NotFound desc = could not find container \"1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d\": container with ID starting with 1433ea57882208a06d120e938c0062b49ca73a79b82e863500ea23ceb252ad8d not found: ID does not exist" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.893724 4688 scope.go:117] "RemoveContainer" containerID="9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55" Jan 23 19:47:34 crc kubenswrapper[4688]: E0123 19:47:34.894655 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55\": container with ID starting with 9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55 not found: ID does not exist" containerID="9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.894801 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55"} err="failed to get container status \"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55\": rpc error: code = NotFound desc = could not find container \"9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55\": container with ID starting with 9d0b9215a02b2531f77e9e379d086cc82fc8a4281a8e89109f221fdc59d4be55 not found: ID does not exist" Jan 23 19:47:34 crc 
kubenswrapper[4688]: I0123 19:47:34.894922 4688 scope.go:117] "RemoveContainer" containerID="ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c" Jan 23 19:47:34 crc kubenswrapper[4688]: E0123 19:47:34.896085 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c\": container with ID starting with ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c not found: ID does not exist" containerID="ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.896167 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c"} err="failed to get container status \"ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c\": rpc error: code = NotFound desc = could not find container \"ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c\": container with ID starting with ee1f13cb498f7924cbae2cc3c5b88bd9fa00de5cbf7b5881226b5418d22d3f9c not found: ID does not exist" Jan 23 19:47:34 crc kubenswrapper[4688]: I0123 19:47:34.956493 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5kb66_45add2ba-c382-4807-8995-43514182b85a/ssh-known-hosts-edpm-deployment/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.097872 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c564cf675-l776t_8985e53c-d4f0-4f9a-96be-a540d7279676/proxy-server/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.320473 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5c564cf675-l776t_8985e53c-d4f0-4f9a-96be-a540d7279676/proxy-httpd/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.329662 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vr6nh_d7367189-3db1-4176-8281-2b50a8b3df49/swift-ring-rebalance/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.372409 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" path="/var/lib/kubelet/pods/a4cb30c2-06c5-449d-a55d-d379f3f09440/volumes" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.468148 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-auditor/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.535319 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-reaper/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.600950 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-replicator/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.686876 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/account-server/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.691576 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-auditor/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.823035 4688 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-replicator/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.874184 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-server/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.927044 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/container-updater/0.log" Jan 23 19:47:35 crc kubenswrapper[4688]: I0123 19:47:35.985257 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-auditor/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.107897 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-expirer/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.126390 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-replicator/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.160911 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-server/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.240490 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/object-updater/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.336079 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/rsync/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.356751 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:47:36 crc kubenswrapper[4688]: E0123 19:47:36.357067 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.577497 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ccb24002-aac7-4341-b434-58189d7792e5/swift-recon-cron/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.761566 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-j8kwx_fc299185-3ca0-4d2b-b24c-ab75fc65d49a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:36 crc kubenswrapper[4688]: I0123 19:47:36.860040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_18226ae9-4f88-4376-a16d-b59b78912de7/tempest-tests-tempest-tests-runner/0.log" Jan 23 19:47:37 crc kubenswrapper[4688]: I0123 19:47:37.030746 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_53111825-5a43-4a5c-924a-39e6ded40854/test-operator-logs-container/0.log" Jan 23 19:47:37 crc kubenswrapper[4688]: I0123 19:47:37.047410 
4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-b4nck_e744642f-69d6-47a9-83a8-2cc90a504000/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 19:47:37 crc kubenswrapper[4688]: I0123 19:47:37.854300 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_245b0b2d-bf7c-4ac9-9fc3-f530a5cffead/watcher-applier/0.log" Jan 23 19:47:38 crc kubenswrapper[4688]: I0123 19:47:38.305529 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ded0f19f-c836-47bf-83f9-88634d30f76d/watcher-api-log/0.log" Jan 23 19:47:39 crc kubenswrapper[4688]: I0123 19:47:39.121604 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_0e25f1cb-df6e-441a-ba49-b8de51d05434/watcher-decision-engine/0.log" Jan 23 19:47:42 crc kubenswrapper[4688]: I0123 19:47:42.028547 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_ded0f19f-c836-47bf-83f9-88634d30f76d/watcher-api/0.log" Jan 23 19:47:42 crc kubenswrapper[4688]: I0123 19:47:42.989979 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1f6b6ab4-d03a-449f-af86-48e4f7cfbd1c/memcached/0.log" Jan 23 19:47:50 crc kubenswrapper[4688]: I0123 19:47:50.356737 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:47:50 crc kubenswrapper[4688]: E0123 19:47:50.359035 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:04 crc kubenswrapper[4688]: I0123 19:48:04.357369 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:48:04 crc kubenswrapper[4688]: E0123 19:48:04.358160 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.234127 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.411427 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.454050 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.483541 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.643408 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/util/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.649609 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/pull/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.649996 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2f995dbbe3260cefba1f5faad38584819214dc01075cb1683f9b62bf5es5nhx_05f53c55-f189-46fa-b193-2efbe87d3356/extract/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.909593 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-rmt2k_9c6839a5-f543-42e6-8c94-7138c1200112/manager/0.log" Jan 23 19:48:05 crc kubenswrapper[4688]: I0123 19:48:05.912521 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-2qzlh_bd62301c-d101-483c-8fe3-a1a5eddee7fc/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.175119 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-wz5qj_e9c016a5-4953-4944-9f6e-f086e5a70918/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.213846 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-q56fh_9ac53122-55ee-4db4-ad7c-8369e5117efe/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.381133 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-v4qgl_be846838-ce35-4c14-a0ea-3a501d4ef6ac/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.413048 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wt2bv_e53011a2-ea48-49f2-afbc-0d4bf71ae725/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.584514 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-ztl8x_30cd4339-ab66-45e3-937d-b3d9b5c3ef62/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.894294 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-q4wv8_cae5b14f-5f7e-477f-a17a-9ad3930c6862/manager/0.log" Jan 23 19:48:06 crc kubenswrapper[4688]: I0123 19:48:06.906273 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-kjh92_b0ecc6d1-2625-4fba-860a-3931984ec27a/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.055659 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-q6tnb_6daaa808-ea3a-43fb-bff1-285cf870df77/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.162208 4688 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-hfhz6_4c19c0b3-d09e-4a51-8ac1-522bba6d7a5f/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.338724 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-47x6q_5e61a329-1ac1-4162-9d68-f3086ec3f16e/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.510480 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-k2g2j_676572f9-6a9f-4a4e-ae4c-8d8d300bf02a/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.625914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-mq2kk_1232d539-d6e5-4aa6-ac00-36be9120b247/manager/0.log" Jan 23 19:48:07 crc kubenswrapper[4688]: I0123 19:48:07.677571 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854s7w97_af851c54-521b-4a32-95fd-df9fd55d2eee/manager/0.log" Jan 23 19:48:08 crc kubenswrapper[4688]: I0123 19:48:08.120259 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68b845cd55-nswgt_a7210d87-1894-4295-b8bd-0189ea05db2c/operator/0.log" Jan 23 19:48:08 crc kubenswrapper[4688]: I0123 19:48:08.238839 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z2jjg_491f4103-b520-4b84-9f90-a2d21d168a7a/registry-server/0.log" Jan 23 19:48:08 crc kubenswrapper[4688]: I0123 19:48:08.431058 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-6xgwb_f277821c-c358-4283-ad35-61b187fb0878/manager/0.log" Jan 23 19:48:08 crc kubenswrapper[4688]: I0123 19:48:08.779476 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-zk9c9_f53bddcc-3d14-4066-980c-dcfa14f2965e/manager/0.log" Jan 23 19:48:08 crc kubenswrapper[4688]: I0123 19:48:08.930982 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-qlqcd_8d9bd4af-849d-417f-9bbd-8e661b88d557/operator/0.log" Jan 23 19:48:09 crc kubenswrapper[4688]: I0123 19:48:09.172280 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-59bd4c58c8-qlfvx_d9d12e4b-1ea1-4d3d-ae3e-1eb1f39532fc/manager/0.log" Jan 23 19:48:09 crc kubenswrapper[4688]: I0123 19:48:09.206507 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9p6ps_b058c042-b4f7-4470-82ec-4f5336b47992/manager/0.log" Jan 23 19:48:09 crc kubenswrapper[4688]: I0123 19:48:09.454872 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-l59kj_6e8fb123-6d73-47c6-9d23-930c6ba3de69/manager/0.log" Jan 23 19:48:09 crc kubenswrapper[4688]: I0123 19:48:09.477914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-k6hng_55bb8a6a-0401-4cdc-92fb-595c5eeb5e55/manager/0.log" Jan 23 19:48:09 crc kubenswrapper[4688]: I0123 19:48:09.648364 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-679dc965c9-qrkxl_26066212-ab72-4450-b9b3-b08e6b43e333/manager/0.log" Jan 23 19:48:16 crc kubenswrapper[4688]: I0123 19:48:16.356965 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:48:16 crc kubenswrapper[4688]: E0123 19:48:16.359876 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:30 crc kubenswrapper[4688]: I0123 19:48:30.355791 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:48:30 crc kubenswrapper[4688]: E0123 19:48:30.356557 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:30 crc kubenswrapper[4688]: I0123 19:48:30.378271 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hdshg_4203f041-a5af-47a8-999b-329b617fe415/control-plane-machine-set-operator/0.log" Jan 23 19:48:30 crc kubenswrapper[4688]: I0123 19:48:30.533966 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mrcbl_10c46862-d70f-445e-82a8-f76c17326a8b/kube-rbac-proxy/0.log" Jan 23 19:48:30 crc kubenswrapper[4688]: I0123 19:48:30.596659 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mrcbl_10c46862-d70f-445e-82a8-f76c17326a8b/machine-api-operator/0.log" Jan 23 19:48:43 crc kubenswrapper[4688]: I0123 19:48:43.358106 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:48:43 crc kubenswrapper[4688]: E0123 19:48:43.361146 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:43 crc kubenswrapper[4688]: I0123 19:48:43.993542 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rsccw_9bf3e910-f2fd-4f92-b345-422c1570bd89/cert-manager-controller/0.log" Jan 23 19:48:44 crc kubenswrapper[4688]: I0123 19:48:44.262295 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vgqkz_5b8a9c18-732a-4bbe-a1b7-c24cc41bbf4e/cert-manager-cainjector/0.log" Jan 23 19:48:44 crc kubenswrapper[4688]: I0123 19:48:44.292273 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-q4zqg_893e289a-c400-40f2-b2cd-a9815c0cf488/cert-manager-webhook/0.log" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.445423 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-42dnb"] Jan 23 19:48:48 crc kubenswrapper[4688]: E0123 19:48:48.446417 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="extract-utilities" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.446431 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="extract-utilities" Jan 23 19:48:48 crc kubenswrapper[4688]: E0123 19:48:48.446451 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="extract-content" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.446459 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="extract-content" Jan 23 19:48:48 crc kubenswrapper[4688]: E0123 19:48:48.446473 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="registry-server" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.446479 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="registry-server" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.446685 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4cb30c2-06c5-449d-a55d-d379f3f09440" containerName="registry-server" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.448285 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.460039 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42dnb"] Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.574678 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.574846 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgv5\" (UniqueName: \"kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.574935 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.677231 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.677354 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.677443 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgv5\" (UniqueName: \"kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.678140 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.678389 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.699282 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bmgv5\" (UniqueName: \"kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5\") pod \"community-operators-42dnb\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") " pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:48 crc kubenswrapper[4688]: I0123 19:48:48.781744 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:49 crc kubenswrapper[4688]: I0123 19:48:49.371347 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-42dnb"] Jan 23 19:48:49 crc kubenswrapper[4688]: I0123 19:48:49.495115 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerStarted","Data":"bf197c21b64c965647adaf9017ba38b2c40ba2a3f53426c317683873065b7f3d"} Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.507888 4688 generic.go:334] "Generic (PLEG): container finished" podID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerID="cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634" exitCode=0 Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.507929 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerDied","Data":"cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634"} Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.658908 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qr22z"] Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.678725 4688 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.717406 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr22z"] Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.825308 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z94w\" (UniqueName: \"kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.825434 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.825516 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.927487 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.927898 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z94w\" (UniqueName: \"kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.928007 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.928008 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.928416 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:50 crc kubenswrapper[4688]: I0123 19:48:50.946976 4688 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9z94w\" (UniqueName: \"kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w\") pod \"certified-operators-qr22z\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") " pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:51 crc kubenswrapper[4688]: I0123 19:48:51.022245 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:48:51 crc kubenswrapper[4688]: I0123 19:48:51.520083 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerStarted","Data":"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"} Jan 23 19:48:51 crc kubenswrapper[4688]: I0123 19:48:51.583323 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr22z"] Jan 23 19:48:52 crc kubenswrapper[4688]: I0123 19:48:52.532087 4688 generic.go:334] "Generic (PLEG): container finished" podID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerID="ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f" exitCode=0 Jan 23 19:48:52 crc kubenswrapper[4688]: I0123 19:48:52.532198 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerDied","Data":"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"} Jan 23 19:48:52 crc kubenswrapper[4688]: I0123 19:48:52.534868 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2e0caad-b195-483e-bec3-04c0412022ee" containerID="aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f" exitCode=0 Jan 23 19:48:52 crc kubenswrapper[4688]: I0123 19:48:52.534900 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerDied","Data":"aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f"} Jan 23 19:48:52 crc kubenswrapper[4688]: I0123 19:48:52.534927 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerStarted","Data":"4b45c3f64971546dcbfa37111bc94985f5eedeb6fed24f0dd36ac53758e360e7"} Jan 23 19:48:53 crc kubenswrapper[4688]: I0123 19:48:53.546996 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerStarted","Data":"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"} Jan 23 19:48:53 crc kubenswrapper[4688]: I0123 19:48:53.567287 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-42dnb" podStartSLOduration=3.107450677 podStartE2EDuration="5.567263472s" podCreationTimestamp="2026-01-23 19:48:48 +0000 UTC" firstStartedPulling="2026-01-23 19:48:50.509921999 +0000 UTC m=+6125.505746440" lastFinishedPulling="2026-01-23 19:48:52.969734794 +0000 UTC m=+6127.965559235" observedRunningTime="2026-01-23 19:48:53.565940894 +0000 UTC m=+6128.561765345" watchObservedRunningTime="2026-01-23 19:48:53.567263472 +0000 UTC m=+6128.563087913" Jan 23 19:48:54 crc kubenswrapper[4688]: I0123 19:48:54.559480 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerStarted","Data":"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"} Jan 23 19:48:56 crc kubenswrapper[4688]: I0123 19:48:56.357307 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a" Jan 23 19:48:56 crc kubenswrapper[4688]: E0123 19:48:56.358125 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" Jan 23 19:48:56 crc kubenswrapper[4688]: I0123 19:48:56.578972 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2e0caad-b195-483e-bec3-04c0412022ee" containerID="57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf" exitCode=0 Jan 23 19:48:56 crc kubenswrapper[4688]: I0123 19:48:56.579016 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerDied","Data":"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"} Jan 23 19:48:57 crc kubenswrapper[4688]: I0123 19:48:57.600743 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerStarted","Data":"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"} Jan 23 19:48:57 crc kubenswrapper[4688]: I0123 19:48:57.629360 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qr22z" podStartSLOduration=3.118249791 podStartE2EDuration="7.629337337s" podCreationTimestamp="2026-01-23 19:48:50 +0000 UTC" firstStartedPulling="2026-01-23 19:48:52.5387832 +0000 UTC m=+6127.534607641" lastFinishedPulling="2026-01-23 19:48:57.049870736 +0000 UTC m=+6132.045695187" observedRunningTime="2026-01-23 19:48:57.619659529 +0000 UTC m=+6132.615483960" watchObservedRunningTime="2026-01-23 19:48:57.629337337 +0000 UTC m=+6132.625161778" Jan 23 19:48:58 crc kubenswrapper[4688]: I0123 19:48:58.782457 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:58 crc kubenswrapper[4688]: I0123 19:48:58.782806 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:58 crc kubenswrapper[4688]: I0123 19:48:58.834962 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.499328 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wcxg2_0ba8c497-753e-46c1-b423-cd7cd1b3616e/nmstate-console-plugin/0.log" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.675768 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-42dnb" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.754562 4688 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-4hkd8_a07585a4-2f3a-4062-9083-c64fcc9463a3/nmstate-handler/0.log" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.810510 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8dl6z_c65c520e-8672-463c-9337-3be6c949d06f/kube-rbac-proxy/0.log" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.872591 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8dl6z_c65c520e-8672-463c-9337-3be6c949d06f/nmstate-metrics/0.log" Jan 23 19:48:59 crc kubenswrapper[4688]: I0123 19:48:59.983260 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l8trt_645d147f-c2e5-4c1f-b9ee-6a08aa8a1d81/nmstate-operator/0.log" Jan 23 19:49:00 crc kubenswrapper[4688]: I0123 19:49:00.093630 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wzgkn_c43497a2-9efb-47c2-b161-88cfe2b1aabb/nmstate-webhook/0.log" Jan 23 19:49:00 crc kubenswrapper[4688]: I0123 19:49:00.632578 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-42dnb"] Jan 23 19:49:01 crc kubenswrapper[4688]: I0123 19:49:01.023051 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:49:01 crc kubenswrapper[4688]: I0123 19:49:01.023909 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:49:01 crc kubenswrapper[4688]: I0123 19:49:01.086731 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qr22z" Jan 23 19:49:01 crc kubenswrapper[4688]: I0123 19:49:01.634330 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-42dnb" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="registry-server" containerID="cri-o://1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132" gracePeriod=2 Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.129536 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-42dnb"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.287307 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content\") pod \"86d5aec5-c8ef-48b7-9768-72c7822240d7\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") "
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.287680 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgv5\" (UniqueName: \"kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5\") pod \"86d5aec5-c8ef-48b7-9768-72c7822240d7\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") "
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.287795 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities\") pod \"86d5aec5-c8ef-48b7-9768-72c7822240d7\" (UID: \"86d5aec5-c8ef-48b7-9768-72c7822240d7\") "
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.288530 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities" (OuterVolumeSpecName: "utilities") pod "86d5aec5-c8ef-48b7-9768-72c7822240d7" (UID: "86d5aec5-c8ef-48b7-9768-72c7822240d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.294464 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5" (OuterVolumeSpecName: "kube-api-access-bmgv5") pod "86d5aec5-c8ef-48b7-9768-72c7822240d7" (UID: "86d5aec5-c8ef-48b7-9768-72c7822240d7"). InnerVolumeSpecName "kube-api-access-bmgv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.341857 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86d5aec5-c8ef-48b7-9768-72c7822240d7" (UID: "86d5aec5-c8ef-48b7-9768-72c7822240d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.390467 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.390515 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmgv5\" (UniqueName: \"kubernetes.io/projected/86d5aec5-c8ef-48b7-9768-72c7822240d7-kube-api-access-bmgv5\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.390528 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86d5aec5-c8ef-48b7-9768-72c7822240d7-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.647452 4688 generic.go:334] "Generic (PLEG): container finished" podID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerID="1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132" exitCode=0
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.647516 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerDied","Data":"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"}
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.647556 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-42dnb"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.647576 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-42dnb" event={"ID":"86d5aec5-c8ef-48b7-9768-72c7822240d7","Type":"ContainerDied","Data":"bf197c21b64c965647adaf9017ba38b2c40ba2a3f53426c317683873065b7f3d"}
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.647599 4688 scope.go:117] "RemoveContainer" containerID="1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.682379 4688 scope.go:117] "RemoveContainer" containerID="ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.693405 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-42dnb"]
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.706685 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-42dnb"]
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.719135 4688 scope.go:117] "RemoveContainer" containerID="cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.759291 4688 scope.go:117] "RemoveContainer" containerID="1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"
Jan 23 19:49:02 crc kubenswrapper[4688]: E0123 19:49:02.760267 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132\": container with ID starting with 1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132 not found: ID does not exist" containerID="1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.760315 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132"} err="failed to get container status \"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132\": rpc error: code = NotFound desc = could not find container \"1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132\": container with ID starting with 1511f6caf54c286bde5667d940e663cfb3b1d98a9ffa90bfde8f355d8c5c7132 not found: ID does not exist"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.760347 4688 scope.go:117] "RemoveContainer" containerID="ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"
Jan 23 19:49:02 crc kubenswrapper[4688]: E0123 19:49:02.760734 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f\": container with ID starting with ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f not found: ID does not exist" containerID="ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.760778 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f"} err="failed to get container status \"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f\": rpc error: code = NotFound desc = could not find container \"ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f\": container with ID starting with ea233ccf1dde51ac438c612b08d93cedc24a0de3119bd733d3b5e36289bf473f not found: ID does not exist"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.760815 4688 scope.go:117] "RemoveContainer" containerID="cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634"
Jan 23 19:49:02 crc kubenswrapper[4688]: E0123 19:49:02.761298 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634\": container with ID starting with cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634 not found: ID does not exist" containerID="cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634"
Jan 23 19:49:02 crc kubenswrapper[4688]: I0123 19:49:02.761327 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634"} err="failed to get container status \"cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634\": rpc error: code = NotFound desc = could not find container \"cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634\": container with ID starting with cd5ccf02801e696eb765ec28a0d31e4973da695a6d6c85a03e1c55269e8d6634 not found: ID does not exist"
Jan 23 19:49:03 crc kubenswrapper[4688]: I0123 19:49:03.370026 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" path="/var/lib/kubelet/pods/86d5aec5-c8ef-48b7-9768-72c7822240d7/volumes"
Jan 23 19:49:10 crc kubenswrapper[4688]: I0123 19:49:10.356482 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:49:10 crc kubenswrapper[4688]: E0123 19:49:10.357631 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:49:11 crc kubenswrapper[4688]: I0123 19:49:11.072690 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qr22z"
Jan 23 19:49:11 crc kubenswrapper[4688]: I0123 19:49:11.119668 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qr22z"]
Jan 23 19:49:11 crc kubenswrapper[4688]: I0123 19:49:11.739488 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qr22z" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="registry-server" containerID="cri-o://ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8" gracePeriod=2
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.192912 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qr22z"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.304551 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z94w\" (UniqueName: \"kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w\") pod \"e2e0caad-b195-483e-bec3-04c0412022ee\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") "
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.304743 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities\") pod \"e2e0caad-b195-483e-bec3-04c0412022ee\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") "
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.304818 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content\") pod \"e2e0caad-b195-483e-bec3-04c0412022ee\" (UID: \"e2e0caad-b195-483e-bec3-04c0412022ee\") "
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.305748 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities" (OuterVolumeSpecName: "utilities") pod "e2e0caad-b195-483e-bec3-04c0412022ee" (UID: "e2e0caad-b195-483e-bec3-04c0412022ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.311022 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w" (OuterVolumeSpecName: "kube-api-access-9z94w") pod "e2e0caad-b195-483e-bec3-04c0412022ee" (UID: "e2e0caad-b195-483e-bec3-04c0412022ee"). InnerVolumeSpecName "kube-api-access-9z94w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.348862 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2e0caad-b195-483e-bec3-04c0412022ee" (UID: "e2e0caad-b195-483e-bec3-04c0412022ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.407827 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.407864 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e0caad-b195-483e-bec3-04c0412022ee-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.407878 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z94w\" (UniqueName: \"kubernetes.io/projected/e2e0caad-b195-483e-bec3-04c0412022ee-kube-api-access-9z94w\") on node \"crc\" DevicePath \"\""
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.752023 4688 generic.go:334] "Generic (PLEG): container finished" podID="e2e0caad-b195-483e-bec3-04c0412022ee" containerID="ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8" exitCode=0
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.752084 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerDied","Data":"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"}
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.752095 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qr22z"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.752131 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr22z" event={"ID":"e2e0caad-b195-483e-bec3-04c0412022ee","Type":"ContainerDied","Data":"4b45c3f64971546dcbfa37111bc94985f5eedeb6fed24f0dd36ac53758e360e7"}
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.752156 4688 scope.go:117] "RemoveContainer" containerID="ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.772552 4688 scope.go:117] "RemoveContainer" containerID="57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.790835 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qr22z"]
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.795847 4688 scope.go:117] "RemoveContainer" containerID="aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.799755 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qr22z"]
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.842664 4688 scope.go:117] "RemoveContainer" containerID="ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"
Jan 23 19:49:12 crc kubenswrapper[4688]: E0123 19:49:12.843093 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8\": container with ID starting with ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8 not found: ID does not exist" containerID="ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.843133 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8"} err="failed to get container status \"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8\": rpc error: code = NotFound desc = could not find container \"ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8\": container with ID starting with ee29165d36e0d93676b9e7f5f219c392be0e703592a779eaddc4f0cfc2ba9aa8 not found: ID does not exist"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.843156 4688 scope.go:117] "RemoveContainer" containerID="57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"
Jan 23 19:49:12 crc kubenswrapper[4688]: E0123 19:49:12.843461 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf\": container with ID starting with 57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf not found: ID does not exist" containerID="57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.843508 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf"} err="failed to get container status \"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf\": rpc error: code = NotFound desc = could not find container \"57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf\": container with ID starting with 57d64f93d9da990adfde9cf67c62fad7f89de994becd17d65cfdb728270f73cf not found: ID does not exist"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.843529 4688 scope.go:117] "RemoveContainer" containerID="aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f"
Jan 23 19:49:12 crc kubenswrapper[4688]: E0123 19:49:12.845544 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f\": container with ID starting with aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f not found: ID does not exist" containerID="aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f"
Jan 23 19:49:12 crc kubenswrapper[4688]: I0123 19:49:12.845597 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f"} err="failed to get container status \"aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f\": rpc error: code = NotFound desc = could not find container \"aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f\": container with ID starting with aecbaef9abd37d6a642c61be3f13ca0e84e4ea0c57e1e140dbd7480980446f8f not found: ID does not exist"
Jan 23 19:49:13 crc kubenswrapper[4688]: I0123 19:49:13.367609 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" path="/var/lib/kubelet/pods/e2e0caad-b195-483e-bec3-04c0412022ee/volumes"
Jan 23 19:49:13 crc kubenswrapper[4688]: I0123 19:49:13.653777 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkspw_505c5412-6a67-4596-ae6a-bbd51d146126/prometheus-operator/0.log"
Jan 23 19:49:13 crc kubenswrapper[4688]: I0123 19:49:13.802765 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h_587391e1-2b8a-40a1-9106-cdda7cb8a2bd/prometheus-operator-admission-webhook/0.log"
Jan 23 19:49:13 crc kubenswrapper[4688]: I0123 19:49:13.930196 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f_318d598f-84d5-418c-b820-d7ade7fcc8de/prometheus-operator-admission-webhook/0.log"
Jan 23 19:49:14 crc kubenswrapper[4688]: I0123 19:49:14.022977 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-86gvw_8f8e5732-68b1-4f4e-906c-303e1eb20baf/operator/0.log"
Jan 23 19:49:14 crc kubenswrapper[4688]: I0123 19:49:14.126142 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pgd8p_9cb38355-91e8-4856-abfa-b307e3f1909b/perses-operator/0.log"
Jan 23 19:49:22 crc kubenswrapper[4688]: I0123 19:49:22.357117 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:49:22 crc kubenswrapper[4688]: E0123 19:49:22.357935 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:49:27 crc kubenswrapper[4688]: I0123 19:49:27.653980 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-89xj6_f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc/kube-rbac-proxy/0.log"
Jan 23 19:49:27 crc kubenswrapper[4688]: I0123 19:49:27.816765 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-89xj6_f2a8c0fb-4afd-4594-85ef-db4cea0ce3bc/controller/0.log"
Jan 23 19:49:27 crc kubenswrapper[4688]: I0123 19:49:27.920510 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.077651 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.097006 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.105543 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.150561 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.321342 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.363535 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.365696 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.383792 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.562699 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-frr-files/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.567117 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/controller/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.587936 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-reloader/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.593139 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/cp-metrics/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.802736 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/kube-rbac-proxy/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.832612 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/kube-rbac-proxy-frr/0.log"
Jan 23 19:49:28 crc kubenswrapper[4688]: I0123 19:49:28.857999 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/frr-metrics/0.log"
Jan 23 19:49:29 crc kubenswrapper[4688]: I0123 19:49:29.105590 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8kldf_183de16f-fe88-4b85-9c1c-980569d0a89d/frr-k8s-webhook-server/0.log"
Jan 23 19:49:29 crc kubenswrapper[4688]: I0123 19:49:29.108369 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/reloader/0.log"
Jan 23 19:49:29 crc kubenswrapper[4688]: I0123 19:49:29.371941 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-844488998d-d4vzw_1e8a4a5c-bbf0-404d-aada-461ca3e42d72/manager/0.log"
Jan 23 19:49:29 crc kubenswrapper[4688]: I0123 19:49:29.609914 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6979454977-pw2fp_61d2f464-2eea-403d-a6e7-3a5bb3a067a5/webhook-server/0.log"
Jan 23 19:49:29 crc kubenswrapper[4688]: I0123 19:49:29.614871 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zq5np_5950921c-c4d2-44ac-8fb9-853d22c0f04a/kube-rbac-proxy/0.log"
Jan 23 19:49:30 crc kubenswrapper[4688]: I0123 19:49:30.343303 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zq5np_5950921c-c4d2-44ac-8fb9-853d22c0f04a/speaker/0.log"
Jan 23 19:49:30 crc kubenswrapper[4688]: I0123 19:49:30.583785 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hmn2j_0b227eb2-6da0-43af-a365-a532ff4e4a86/frr/0.log"
Jan 23 19:49:37 crc kubenswrapper[4688]: I0123 19:49:37.356992 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:49:37 crc kubenswrapper[4688]: E0123 19:49:37.358756 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:49:42 crc kubenswrapper[4688]: I0123 19:49:42.780271 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log"
Jan 23 19:49:42 crc kubenswrapper[4688]: I0123 19:49:42.977018 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.039142 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.049085 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.203246 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.208936 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/util/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.247545 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdnskf_565c6f37-d514-4443-965d-f482233b748b/extract/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.387508 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.551118 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.571367 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.589746 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.753539 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/pull/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.754248 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/util/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.766941 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g54d2_17ee3cf3-5aa8-443c-b750-b01f9aa16af4/extract/0.log"
Jan 23 19:49:43 crc kubenswrapper[4688]: I0123 19:49:43.932358 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.131168 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.132748 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.151697 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.315213 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/pull/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.326216 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/util/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.362541 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08cs96g_93e09072-68c1-41dd-8bf2-b939b18899b2/extract/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.499634 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.646089 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.650599 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.653715 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.833727 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-content/0.log"
Jan 23 19:49:44 crc kubenswrapper[4688]: I0123 19:49:44.881975 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/extract-utilities/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.025086 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.282335 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.343470 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.345854 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.591090 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-utilities/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.681459 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/extract-content/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.782499 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6fb8d_a7f1dd62-ed20-4c0d-8166-14ecfa42faa8/registry-server/0.log"
Jan 23 19:49:45 crc kubenswrapper[4688]: I0123 19:49:45.971869 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4gqq5_f9495fe1-3e6a-410d-8628-ebd588169767/marketplace-operator/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.094635 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.310040 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.313178 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.361041 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.600126 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-utilities/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.669974 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/extract-content/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.679744 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cf85r_4e81430c-65b3-4f6e-9986-8a16cbe69d67/registry-server/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.885225 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log"
Jan 23 19:49:46 crc kubenswrapper[4688]: I0123 19:49:46.913662 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n6tbc_4506f985-1626-4ad5-b924-74cd384786a2/registry-server/0.log"
Jan 23 19:49:47 crc kubenswrapper[4688]: I0123 19:49:47.087395 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log"
Jan 23 19:49:47 crc kubenswrapper[4688]: I0123 19:49:47.096329 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log"
Jan 23 19:49:47 crc kubenswrapper[4688]: I0123 19:49:47.105732 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log"
Jan 23 19:49:47 crc kubenswrapper[4688]: I0123 19:49:47.265289 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-utilities/0.log"
Jan 23 19:49:47 crc kubenswrapper[4688]: I0123 19:49:47.291947 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/extract-content/0.log"
Jan 23 19:49:48 crc kubenswrapper[4688]: I0123 19:49:48.010731 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rp974_61b45c79-4271-40da-9245-bf36100d8d38/registry-server/0.log"
Jan 23 19:49:50 crc kubenswrapper[4688]: I0123 19:49:50.357863 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:49:50 crc kubenswrapper[4688]: E0123 19:49:50.359726 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:50:00 crc kubenswrapper[4688]: I0123 19:50:00.074457 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-fkspw_505c5412-6a67-4596-ae6a-bbd51d146126/prometheus-operator/0.log"
Jan 23 19:50:00 crc kubenswrapper[4688]: I0123 19:50:00.181163 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-wps9f_318d598f-84d5-418c-b820-d7ade7fcc8de/prometheus-operator-admission-webhook/0.log"
Jan 23 19:50:00 crc kubenswrapper[4688]: I0123 19:50:00.186340 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6c8858c9cd-mcm5h_587391e1-2b8a-40a1-9106-cdda7cb8a2bd/prometheus-operator-admission-webhook/0.log"
Jan 23 19:50:00 crc kubenswrapper[4688]: I0123 19:50:00.292643 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-86gvw_8f8e5732-68b1-4f4e-906c-303e1eb20baf/operator/0.log"
Jan 23 19:50:00 crc kubenswrapper[4688]: I0123 19:50:00.353033 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pgd8p_9cb38355-91e8-4856-abfa-b307e3f1909b/perses-operator/0.log"
Jan 23 19:50:01 crc kubenswrapper[4688]: I0123 19:50:01.357224 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:50:01 crc kubenswrapper[4688]: E0123 19:50:01.357804 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:50:06 crc kubenswrapper[4688]: E0123 19:50:06.858878 4688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.213:59968->38.129.56.213:41963: write tcp 38.129.56.213:59968->38.129.56.213:41963: write: broken pipe
Jan 23 19:50:14 crc kubenswrapper[4688]: I0123 19:50:14.357239 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:50:14 crc kubenswrapper[4688]: E0123 19:50:14.358021 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:50:25 crc kubenswrapper[4688]: I0123 19:50:25.364419 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:50:25 crc kubenswrapper[4688]: E0123 19:50:25.365650 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:50:37 crc kubenswrapper[4688]: I0123 19:50:37.358240 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:50:37 crc kubenswrapper[4688]: E0123 19:50:37.359167 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:50:51 crc kubenswrapper[4688]: I0123 19:50:51.356499 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:50:51 crc kubenswrapper[4688]: E0123 19:50:51.357516 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:05 crc kubenswrapper[4688]: I0123 19:51:05.363593 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:51:05 crc kubenswrapper[4688]: E0123 19:51:05.364243 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:18 crc kubenswrapper[4688]: I0123 19:51:18.357589 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:51:18 crc kubenswrapper[4688]: E0123 19:51:18.358762 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:32 crc kubenswrapper[4688]: I0123 19:51:32.356851 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:51:32 crc kubenswrapper[4688]: E0123 19:51:32.358054 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:43 crc kubenswrapper[4688]: I0123 19:51:43.358056 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:51:43 crc kubenswrapper[4688]: E0123 19:51:43.359442 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:54 crc kubenswrapper[4688]: I0123 19:51:54.357714 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:51:54 crc kubenswrapper[4688]: E0123 19:51:54.358964 4688 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nkhx2_openshift-machine-config-operator(282fed6d-4a28-4498-add6-0240e6414dc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4"
Jan 23 19:51:59 crc kubenswrapper[4688]: I0123 19:51:59.904052 4688 scope.go:117] "RemoveContainer" containerID="4725e80ed635df0ba7c9135a6aae6d63009ef27e195a0e48bb4d823f1ae24972"
Jan 23 19:52:03 crc kubenswrapper[4688]: I0123 19:52:03.594689 4688 generic.go:334] "Generic (PLEG): container finished" podID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerID="bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc" exitCode=0
Jan 23 19:52:03 crc kubenswrapper[4688]: I0123 19:52:03.594790 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7fxjq/must-gather-fpltz" event={"ID":"e60d2422-4bc0-4b1e-9659-0981cbe14bcc","Type":"ContainerDied","Data":"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"}
Jan 23 19:52:03 crc kubenswrapper[4688]: I0123 19:52:03.595938 4688 scope.go:117] "RemoveContainer" containerID="bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"
Jan 23 19:52:04 crc kubenswrapper[4688]: I0123 19:52:04.106954 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7fxjq_must-gather-fpltz_e60d2422-4bc0-4b1e-9659-0981cbe14bcc/gather/0.log"
Jan 23 19:52:09 crc kubenswrapper[4688]: I0123 19:52:09.357162 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:52:09 crc kubenswrapper[4688]: I0123 19:52:09.671838 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"e47cab40e74a51e4db8087425368c6f0eae2f11cc5184df0f723b29ea4a8d1e7"}
Jan 23 19:52:17 crc kubenswrapper[4688]: I0123 19:52:17.793566 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7fxjq/must-gather-fpltz"]
Jan 23 19:52:17 crc kubenswrapper[4688]: I0123 19:52:17.794425 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7fxjq/must-gather-fpltz" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="copy" containerID="cri-o://9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d" gracePeriod=2
Jan 23 19:52:17 crc kubenswrapper[4688]: I0123 19:52:17.803468 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7fxjq/must-gather-fpltz"]
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.346357 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7fxjq_must-gather-fpltz_e60d2422-4bc0-4b1e-9659-0981cbe14bcc/copy/0.log"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.347138 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/must-gather-fpltz"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.461020 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v75ds\" (UniqueName: \"kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds\") pod \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") "
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.461091 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output\") pod \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\" (UID: \"e60d2422-4bc0-4b1e-9659-0981cbe14bcc\") "
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.468145 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds" (OuterVolumeSpecName: "kube-api-access-v75ds") pod "e60d2422-4bc0-4b1e-9659-0981cbe14bcc" (UID: "e60d2422-4bc0-4b1e-9659-0981cbe14bcc"). InnerVolumeSpecName "kube-api-access-v75ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.564565 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v75ds\" (UniqueName: \"kubernetes.io/projected/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-kube-api-access-v75ds\") on node \"crc\" DevicePath \"\""
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.648120 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e60d2422-4bc0-4b1e-9659-0981cbe14bcc" (UID: "e60d2422-4bc0-4b1e-9659-0981cbe14bcc"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.670526 4688 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e60d2422-4bc0-4b1e-9659-0981cbe14bcc-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.754266 4688 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7fxjq_must-gather-fpltz_e60d2422-4bc0-4b1e-9659-0981cbe14bcc/copy/0.log"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.754608 4688 generic.go:334] "Generic (PLEG): container finished" podID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerID="9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d" exitCode=143
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.754659 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7fxjq/must-gather-fpltz"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.754715 4688 scope.go:117] "RemoveContainer" containerID="9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.781437 4688 scope.go:117] "RemoveContainer" containerID="bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.887042 4688 scope.go:117] "RemoveContainer" containerID="9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d"
Jan 23 19:52:18 crc kubenswrapper[4688]: E0123 19:52:18.887551 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d\": container with ID starting with 9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d not found: ID does not exist" containerID="9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.887774 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d"} err="failed to get container status \"9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d\": rpc error: code = NotFound desc = could not find container \"9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d\": container with ID starting with 9629c46e9a10b12cd7ab52029a61d57e1077613f8452dad1665b5351fc08b89d not found: ID does not exist"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.887802 4688 scope.go:117] "RemoveContainer" containerID="bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"
Jan 23 19:52:18 crc kubenswrapper[4688]: E0123 19:52:18.888081 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc\": container with ID starting with bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc not found: ID does not exist" containerID="bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"
Jan 23 19:52:18 crc kubenswrapper[4688]: I0123 19:52:18.888118 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc"} err="failed to get container status \"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc\": rpc error: code = NotFound desc = could not find container \"bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc\": container with ID starting with bb621aa2aba3a34e9dc4e32efe9f9fbb7dc6658fdccd882ac722d929845045fc not found: ID does not exist"
Jan 23 19:52:19 crc kubenswrapper[4688]: I0123 19:52:19.367080 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" path="/var/lib/kubelet/pods/e60d2422-4bc0-4b1e-9659-0981cbe14bcc/volumes"
Jan 23 19:52:59 crc kubenswrapper[4688]: I0123 19:52:59.963314 4688 scope.go:117] "RemoveContainer" containerID="3b9426e89f1d9a3595c08033d624359ccc777644fd194322b1c1e8346af501c7"
Jan 23 19:54:36 crc kubenswrapper[4688]: I0123 19:54:36.965969 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 19:54:36 crc kubenswrapper[4688]: I0123 19:54:36.966700 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 19:55:06 crc kubenswrapper[4688]: I0123 19:55:06.965364 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 19:55:06 crc kubenswrapper[4688]: I0123 19:55:06.966160 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 19:55:36 crc kubenswrapper[4688]: I0123 19:55:36.965671 4688 patch_prober.go:28] interesting pod/machine-config-daemon-nkhx2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 19:55:36 crc kubenswrapper[4688]: I0123 19:55:36.966117 4688 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 19:55:36 crc kubenswrapper[4688]: I0123 19:55:36.966166 4688 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2"
Jan 23 19:55:36 crc kubenswrapper[4688]: I0123 19:55:36.967037 4688 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e47cab40e74a51e4db8087425368c6f0eae2f11cc5184df0f723b29ea4a8d1e7"} pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 19:55:36 crc kubenswrapper[4688]: I0123 19:55:36.967105 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" podUID="282fed6d-4a28-4498-add6-0240e6414dc4" containerName="machine-config-daemon" containerID="cri-o://e47cab40e74a51e4db8087425368c6f0eae2f11cc5184df0f723b29ea4a8d1e7" gracePeriod=600
Jan 23 19:55:38 crc kubenswrapper[4688]: I0123 19:55:38.030699 4688 generic.go:334] "Generic (PLEG): container finished" podID="282fed6d-4a28-4498-add6-0240e6414dc4" containerID="e47cab40e74a51e4db8087425368c6f0eae2f11cc5184df0f723b29ea4a8d1e7" exitCode=0
Jan 23 19:55:38 crc kubenswrapper[4688]: I0123 19:55:38.030780 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerDied","Data":"e47cab40e74a51e4db8087425368c6f0eae2f11cc5184df0f723b29ea4a8d1e7"}
Jan 23 19:55:38 crc kubenswrapper[4688]: I0123 19:55:38.031168 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nkhx2" event={"ID":"282fed6d-4a28-4498-add6-0240e6414dc4","Type":"ContainerStarted","Data":"5a2fd56f509c0f79fc284260eb1d37f802b60a0cce1396be5007d3248589d9e0"}
Jan 23 19:55:38 crc kubenswrapper[4688]: I0123 19:55:38.031228 4688 scope.go:117] "RemoveContainer" containerID="56f61641941d804467964d87180199bc39e3174c950b61997784bcb0b9037e7a"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.007352 4688 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"]
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008461 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="copy"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008477 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="copy"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008492 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008500 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008523 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="extract-content"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008532 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="extract-content"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008552 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008560 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008589 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="extract-utilities"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008598 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="extract-utilities"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008614 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="gather"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008622 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="gather"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008634 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="extract-utilities"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008642 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="extract-utilities"
Jan 23 19:56:01 crc kubenswrapper[4688]: E0123 19:56:01.008660 4688 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="extract-content"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008669 4688 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="extract-content"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008908 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="copy"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008931 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e0caad-b195-483e-bec3-04c0412022ee" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008949 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="86d5aec5-c8ef-48b7-9768-72c7822240d7" containerName="registry-server"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.008967 4688 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60d2422-4bc0-4b1e-9659-0981cbe14bcc" containerName="gather"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.011017 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.021722 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"]
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.059193 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmmm\" (UniqueName: \"kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.059293 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.059334 4688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.161500 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmmm\" (UniqueName: \"kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.161605 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.161648 4688 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.162367 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.162448 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn"
Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.191732 4688 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-xsmmm\" (UniqueName: \"kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm\") pod \"redhat-operators-4qdsn\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.361329 4688 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:01 crc kubenswrapper[4688]: I0123 19:56:01.890753 4688 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"] Jan 23 19:56:02 crc kubenswrapper[4688]: I0123 19:56:02.278107 4688 generic.go:334] "Generic (PLEG): container finished" podID="64f72300-cf3d-4f67-aaae-0bc543630f4c" containerID="64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6" exitCode=0 Jan 23 19:56:02 crc kubenswrapper[4688]: I0123 19:56:02.278430 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerDied","Data":"64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6"} Jan 23 19:56:02 crc kubenswrapper[4688]: I0123 19:56:02.278461 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerStarted","Data":"9206adbe8ede633d2c9a4799ea226af775b8bf6f3fe40a0fa944407b47b29759"} Jan 23 19:56:02 crc kubenswrapper[4688]: I0123 19:56:02.282497 4688 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 19:56:04 crc kubenswrapper[4688]: I0123 19:56:04.304717 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerStarted","Data":"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92"} Jan 23 19:56:06 crc kubenswrapper[4688]: I0123 19:56:06.327315 4688 generic.go:334] "Generic (PLEG): container finished" podID="64f72300-cf3d-4f67-aaae-0bc543630f4c" containerID="c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92" exitCode=0 Jan 23 19:56:06 crc kubenswrapper[4688]: I0123 19:56:06.327662 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerDied","Data":"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92"} Jan 23 19:56:07 crc kubenswrapper[4688]: I0123 19:56:07.339153 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerStarted","Data":"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28"} Jan 23 19:56:07 crc kubenswrapper[4688]: I0123 19:56:07.367448 4688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4qdsn" podStartSLOduration=2.8895637990000003 podStartE2EDuration="7.367400641s" podCreationTimestamp="2026-01-23 19:56:00 +0000 UTC" firstStartedPulling="2026-01-23 19:56:02.282075764 +0000 UTC m=+6557.277900205" lastFinishedPulling="2026-01-23 19:56:06.759912606 +0000 UTC m=+6561.755737047" observedRunningTime="2026-01-23 19:56:07.360817672 +0000 UTC m=+6562.356642123" watchObservedRunningTime="2026-01-23 19:56:07.367400641 +0000 UTC m=+6562.363225082" Jan 23 19:56:11 crc 
kubenswrapper[4688]: I0123 19:56:11.369249 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:11 crc kubenswrapper[4688]: I0123 19:56:11.369802 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:12 crc kubenswrapper[4688]: I0123 19:56:12.409070 4688 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4qdsn" podUID="64f72300-cf3d-4f67-aaae-0bc543630f4c" containerName="registry-server" probeResult="failure" output=< Jan 23 19:56:12 crc kubenswrapper[4688]: timeout: failed to connect service ":50051" within 1s Jan 23 19:56:12 crc kubenswrapper[4688]: > Jan 23 19:56:21 crc kubenswrapper[4688]: I0123 19:56:21.407026 4688 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:21 crc kubenswrapper[4688]: I0123 19:56:21.465584 4688 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:21 crc kubenswrapper[4688]: I0123 19:56:21.654948 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"] Jan 23 19:56:22 crc kubenswrapper[4688]: I0123 19:56:22.499229 4688 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4qdsn" podUID="64f72300-cf3d-4f67-aaae-0bc543630f4c" containerName="registry-server" containerID="cri-o://7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28" gracePeriod=2 Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.001957 4688 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.080665 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsmmm\" (UniqueName: \"kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm\") pod \"64f72300-cf3d-4f67-aaae-0bc543630f4c\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.080784 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities\") pod \"64f72300-cf3d-4f67-aaae-0bc543630f4c\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.080861 4688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content\") pod \"64f72300-cf3d-4f67-aaae-0bc543630f4c\" (UID: \"64f72300-cf3d-4f67-aaae-0bc543630f4c\") " Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.082445 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities" (OuterVolumeSpecName: "utilities") pod "64f72300-cf3d-4f67-aaae-0bc543630f4c" (UID: "64f72300-cf3d-4f67-aaae-0bc543630f4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.088461 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm" (OuterVolumeSpecName: "kube-api-access-xsmmm") pod "64f72300-cf3d-4f67-aaae-0bc543630f4c" (UID: "64f72300-cf3d-4f67-aaae-0bc543630f4c"). InnerVolumeSpecName "kube-api-access-xsmmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.183760 4688 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsmmm\" (UniqueName: \"kubernetes.io/projected/64f72300-cf3d-4f67-aaae-0bc543630f4c-kube-api-access-xsmmm\") on node \"crc\" DevicePath \"\"" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.183811 4688 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.234965 4688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64f72300-cf3d-4f67-aaae-0bc543630f4c" (UID: "64f72300-cf3d-4f67-aaae-0bc543630f4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.285507 4688 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64f72300-cf3d-4f67-aaae-0bc543630f4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.510056 4688 generic.go:334] "Generic (PLEG): container finished" podID="64f72300-cf3d-4f67-aaae-0bc543630f4c" containerID="7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28" exitCode=0 Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.510107 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerDied","Data":"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28"} Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.510138 4688 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4qdsn" event={"ID":"64f72300-cf3d-4f67-aaae-0bc543630f4c","Type":"ContainerDied","Data":"9206adbe8ede633d2c9a4799ea226af775b8bf6f3fe40a0fa944407b47b29759"} Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.510155 4688 scope.go:117] "RemoveContainer" containerID="7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.510161 4688 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4qdsn" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.537401 4688 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"] Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.539530 4688 scope.go:117] "RemoveContainer" containerID="c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.560802 4688 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4qdsn"] Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.574328 4688 scope.go:117] "RemoveContainer" containerID="64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.626796 4688 scope.go:117] "RemoveContainer" containerID="7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28" Jan 23 19:56:23 crc kubenswrapper[4688]: E0123 19:56:23.627451 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28\": container with ID starting with 7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28 not found: ID does not exist" containerID="7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.627531 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28"} err="failed to get container status \"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28\": rpc error: code = NotFound desc = could not find container \"7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28\": container with ID starting with 7c1cc8d3059347afba559903ad5c8e3580617d8d275520fd4f3ae73e99526b28 not found: ID does not exist" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.627594 4688 scope.go:117] "RemoveContainer" containerID="c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92" Jan 23 19:56:23 crc kubenswrapper[4688]: E0123 19:56:23.628956 4688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92\": container with ID starting with c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92 not found: ID does not exist" containerID="c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.628987 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92"} err="failed to get container status \"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92\": rpc error: code = NotFound desc = could not find container \"c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92\": container with ID starting with c5476decce027a2439940751002a058ab4f2418d4d7871e5f9d4e111c0e1cd92 not found: ID does not exist" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.629009 4688 scope.go:117] "RemoveContainer" containerID="64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6" Jan 23 19:56:23 crc kubenswrapper[4688]: E0123 19:56:23.629542 4688 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6\": container with ID starting with 64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6 not found: ID does not exist" containerID="64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6" Jan 23 19:56:23 crc kubenswrapper[4688]: I0123 19:56:23.629589 4688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6"} err="failed to get container status \"64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6\": rpc error: code = NotFound desc = could not find container \"64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6\": container with ID starting with 64c389cfac8084ad13dee3eeb93cc2481595d38e78234bbcec15ee0489770bd6 not found: ID does not exist" Jan 23 19:56:25 crc kubenswrapper[4688]: I0123 19:56:25.367715 4688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f72300-cf3d-4f67-aaae-0bc543630f4c" path="/var/lib/kubelet/pods/64f72300-cf3d-4f67-aaae-0bc543630f4c/volumes"